id | url | title | text
---|---|---|---|
317551 | https://en.wikipedia.org/wiki/UCPH%20Department%20of%20Computer%20Science | UCPH Department of Computer Science | The UCPH Department of Computer Science () is a department in the Faculty of Science at the University of Copenhagen (UCPH). It is the longest established department of Computer Science in Denmark and was founded in 1970 by Turing Award winner Peter Naur. As of 2021, it employs 82 academic staff, 126 research staff and 38 support staff. It is consistently ranked the top Computer Science department in the Nordic countries, and in 2017 was placed 9th worldwide by the Academic Ranking of World Universities.
History
DIKU has its roots at the Institute for Mathematical Sciences, where, in 1963, the first computer was bought.
In 1969, Peter Naur became the first professor in Computer Science at the University of Copenhagen, and in 1970, DIKU was officially established as its own department.
Research
As of 2021, the department is home to 82 academic staff, 126 research staff and 38 support staff. Research is organised into seven research sections:
The Algorithms and Complexity Section, headed by Mikkel Thorup, who conduct basic algorithms research, as well as research on data structures and computational complexity
The Human‐Centered Computing Section, headed by Kasper Hornbæk, who research human-computer interaction, computer-supported cooperative work, as well as health informatics
The Image Section, headed by Kim Steenstrup Pedersen, who work on image processing including medical image processing, computer vision, physics-based animation and robotics.
The Machine Learning Section, headed by Christina Lioma, researching theoretical machine learning, information retrieval, and machine learning in biology
The Natural Language Processing Section, headed by Isabelle Augenstein, who conduct research on core natural language processing, natural language understanding, computational linguistics, as well as multimodal learning
The Programming Languages and Theory of Computation section, headed by Ken Friis Larsen, researching programming languages, theory of computation, computer security, and approaches to financial transparency
The Software, Data, People & Society Section, headed by Thomas Troels Hildebrandt, who work on decentralised systems, data management systems, and process modelling
Teaching
The department offers programmes at BSc as well as MSc level, both in core computer science and in interdisciplinary subjects. Bachelor's programmes are 3-year programmes and mostly taught in Danish, whereas Master's programmes are 2-year programmes and taught in English. In 2020, DIKU enrolled 610 new Bachelor's students and 136 new Master's students.
As of 2021, DIKU offers the following study programmes:
Bachelor of Science (BSc) in Computer Science
Bachelor of Science (BSc) in Machine Learning and Data Science
Bachelor of Science (BSc) in Computer Science and Economy
Bachelor of Science (BSc) in Communication and IT
Bachelor of Science (BSc) in Health and IT
Master of Science (MSc) in Computer Science
Part-time Master of Science (MSc) in Computer Science
Master of Science (MSc) in IT and Cognition
Master of Science (MSc) in Communication and IT
In addition, the department awards the research degree Doctor of Philosophy (PhD). PhD students are enrolled in the Faculty of Science's Doctoral School for a typical study period of between three and four years.
Location
DIKU is based at University Park in Copenhagen, part of the university's North Campus. Its building complex comprises the former Department of Anatomy. The building was completed in 1942 to a design by Kaj Gottlob.
The Human-Centered Computing Section is located in Sigurdsgade, close to the North Campus.
Student life
An important social event is the DIKU revue which is held each year in June. The DIKU revue is always in competition with the physics revue and never misses an opportunity to computer-animate the complete and utter destruction of the physics institute at the H. C. Ørsted Institute.
Uniquely among the institutes of the University of Copenhagen, the DIKU canteen is entirely student-run and open 24 hours. It is the natural hub for all social events at DIKU.
The two largest social events are the DIKU revue and the canteen's Julefrokost (Christmas lunch).
Notable faculty
Corinna Cortes, who co-developed the highly influential supervised machine learning method support vector machines, has been an adjunct professor at the department since 2011.
Mikkel Thorup, best known for his work on the shortest path problem in undirected graphs, has been a professor at the department since 2013.
Kasper Hornbæk, who won a SIGCHI Lifetime Achievement Award for his work on usability in human-computer interaction, has been a professor at the department since 2014.
Notable alumni
Peter Naur, a Turing Award recipient, was a professor at the department between 1969 and 1998.
Per Brinch Hansen, an IEEE Computer Pioneer Award winner, was a professor at the department between 1984 and 1987.
Mads Tofte, the first managing director of the IT University of Copenhagen and co-developer of the Standard ML programming language, graduated with an MSc in Computer Science and Mathematics in 1984.
Michael Seifert, a Danish computer programmer who developed the popular multiplayer text-based role-playing game DikuMud, was a BSc then MSc student at the department from 1990 to 1996.
Miscellaneous
The domain diku.dk was registered on October 29, 1987, and was one of the first .dk domain names to be registered.
The popular DikuMUD codebase was developed at DIKU in March 1990, and derives its name from the institute.
In the 1994 Danish thriller Nattevagten (Nightwatch), directed by Ole Bornedal, the main entrance and stairwell of the institute were used as a main location.
References
External links
DIKU homepage
DIKU's history
University of Copenhagen
Computer science departments
Educational institutions established in 1970
Educational institutions in Denmark |
620604 | https://en.wikipedia.org/wiki/Text%20mode | Text mode | Text mode is a computer display mode in which content is internally represented on a computer screen in terms of characters rather than individual pixels. Typically, the screen consists of a uniform rectangular grid of character cells, each of which contains one of the characters of a character set. Text mode is contrasted with all points addressable (APA) mode and other kinds of computer graphics modes.
Text mode applications communicate with the user by using command-line interfaces and text user interfaces. Many character sets used in text mode applications also contain a limited set of predefined semi-graphical characters usable for drawing boxes and other rudimentary graphics, which can be used to highlight the content or to simulate widget or control interface objects found in GUI programs. A typical example is the IBM code page 437 character set.
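As a small illustration (a Python sketch chosen only for brevity; the helper name and width are invented for the example), semi-graphical box-drawing characters of the kind found in code page 437, shown here via their Unicode equivalents, can frame text the way text-mode programs simulate widgets:

```python
def boxed(text, width=30):
    """Frame a string with box-drawing characters, as text-mode widgets do."""
    inner = text.center(width - 2)
    top    = "┌" + "─" * (width - 2) + "┐"
    middle = "│" + inner + "│"
    bottom = "└" + "─" * (width - 2) + "┘"
    return "\n".join((top, middle, bottom))

print(boxed("OK"))
```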
An important characteristic of text mode programs is that they assume monospace fonts, where every character has the same width on screen, which allows them to easily maintain vertical alignment when displaying semi-graphical characters. This mirrors early mechanical printers, which had a fixed pitch. This way, the output seen on the screen could be sent directly to the printer while maintaining the same format.
Depending on the environment, the screen buffer can be directly addressable. Programs that display output on remote video terminals must issue special control sequences to manipulate the screen buffer. The most popular standards for such control sequences are ANSI and VT100.
Programs accessing the screen buffer through control sequences may lose synchronization with the actual display, so many text mode programs have a "redisplay everything" command, often associated with the Ctrl-L key combination.
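As a rough illustration, the sketch below emits a few of the standard ANSI/VT100 control sequences mentioned above: ESC[2J clears the screen, ESC[H homes the cursor, ESC[row;colH positions it, and ESC[7m / ESC[0m turn inverse video on and off. Only the escape sequences themselves are standard; the helper functions are invented for the example.

```python
import sys

ESC = "\x1b["  # ANSI Control Sequence Introducer

def clear_screen():
    # ESC[2J erases the display; ESC[H moves the cursor to row 1, column 1.
    sys.stdout.write(ESC + "2J" + ESC + "H")

def put_text(row, col, text, inverse=False):
    # ESC[<row>;<col>H positions the cursor (1-based coordinates).
    attr = ESC + "7m" if inverse else ""    # ESC[7m = inverse video
    reset = ESC + "0m" if inverse else ""   # ESC[0m = reset attributes
    sys.stdout.write(f"{ESC}{row};{col}H{attr}{text}{reset}")

if __name__ == "__main__":
    clear_screen()
    put_text(1, 1, " Status ", inverse=True)
    put_text(3, 5, "Hello from a text-mode style program")
    sys.stdout.flush()
```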
History
Text mode video rendering came to prominence in the early 1970s, when video-oriented text terminals started to replace teleprinters in the interactive use of computers.
Benefits
The advantages of text modes as compared to graphics modes include lower memory consumption and faster screen manipulation. At the time text terminals were beginning to replace teleprinters in the 1970s, the extremely high cost of random access memory in that period made it exorbitantly expensive to install enough memory for a computer to simultaneously store the current value of every pixel on a screen, to form what would now be called a framebuffer. Early framebuffers were standalone devices which cost thousands of dollars, in addition to the expense of the advanced high-resolution displays to which they were connected. For applications that required simple line graphics but for which the expense of a framebuffer could not be justified, vector displays were a popular workaround. But there were many computer applications (e.g., data entry into a database) for which all that was required was the ability to render ordinary text in a quick and cost-effective fashion to a cathode ray tube.
Text mode avoids the problem of expensive memory by having dedicated display hardware re-render each line of text from characters into pixels with each scan of the screen by the cathode ray. In turn, the display hardware needs only enough memory to store the pixels equivalent to one line of text (or even less) at a time. Thus, the computer's screen buffer only stores and knows about the underlying text characters (hence the name "text mode") and the only location where the actual pixels representing those characters exist as a single unified image is the screen itself, as viewed by the user (thanks to the phenomenon of persistence of vision).
For example, a screen buffer sufficient to hold a standard grid of 80 by 25 characters requires at least 2,000 bytes. Assuming a monochrome display, 8 bits per byte, and a standard size of 8 times 8 bits for each character, a framebuffer large enough to hold every pixel on the resulting screen would require at least 128,000 bits, 16,000 bytes, or just under 16 kilobytes. By the standards of modern computers, these may seem like trivial amounts of memory, but to put them in context, the original Apple II was released in 1977 with only four kilobytes of memory and a price of $1,300 in U.S. dollars (at a time when the minimum wage in the United States was only $2.30 per hour). Furthermore, from a business perspective, the business case for text terminals made no sense unless they could be produced and operated more cheaply than the paper-hungry teleprinters they were supposed to replace.
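The arithmetic in the previous paragraph can be written out directly; the figures below (80×25 cells, 8×8-pixel glyphs, one bit per pixel) are the same assumptions made in the text.

```python
COLS, ROWS = 80, 25          # standard text grid
GLYPH_W, GLYPH_H = 8, 8      # pixels per character cell
BITS_PER_PIXEL = 1           # monochrome display

text_buffer_bytes = COLS * ROWS                     # one byte per character code: 2,000 bytes
pixels = (COLS * GLYPH_W) * (ROWS * GLYPH_H)        # 640 x 200 = 128,000 pixels
framebuffer_bytes = pixels * BITS_PER_PIXEL // 8    # 128,000 bits = 16,000 bytes

print(f"text buffer: {text_buffer_bytes:,} bytes")
print(f"framebuffer: {framebuffer_bytes:,} bytes")
print(f"ratio:       {framebuffer_bytes / text_buffer_bytes:.0f}x")
```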
Another advantage of text mode is that it has relatively low bandwidth requirements in remote terminal use. Thus, a text mode remote terminal can necessarily update the screen much faster than a graphics mode remote terminal linked to the same amount of bandwidth (and in turn will seem more responsive), since the remote server may only need to transmit a few dozen bytes for each screen update in text mode, as opposed to complex raster graphics remote procedure calls that may require the transmission and rendering of entire bitmaps.
User-defined characters
The border between text mode and graphical programs can sometimes be fuzzy, especially on the PC's VGA hardware, because many later text mode programs tried to push the model to the extreme by playing with the video controller. For example, they redefined the character set in order to create custom semi-graphical characters, or even created the appearance of a graphical mouse pointer by redefining the appearance of the characters over which the mouse pointer was shown at a given time.
Text mode rendering with user-defined characters has also been useful for 2D computer and video games because the game screen can be manipulated much faster than with pixel-oriented rendering.
Technical basis
A video controller implementing a text mode usually uses two distinct areas of memory. Character memory or a pattern table contains a raster font in use, where each character is represented by a dot matrix (a matrix of bits), so the character memory could be considered as a three-dimensional bit array. Display matrix (a text buffer, screen buffer, or nametable) tracks which character is in each cell. In the simple case the display matrix can be just a matrix of code points (so named character pointer table), but it usually stores for each character position not only a code, but also attributes.
In the case of raster scan output, which is the most common for computer monitors, the corresponding video signal is made by the character generator, a special electronic unit similar to devices with the same name used in video technology. The video controller has two registers: a scan line counter and a dot counter, serving as coordinates in the screen dot matrix. Each of them must be divided by the corresponding glyph dimension to obtain an index into the display matrix; the remainder is an index into the glyph matrix. If the glyph size is a power of two (2^n), it is possible simply to use the n low bits of the binary counter as an index into the glyph matrix, and the remaining bits as an index into the display matrix.
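The indexing scheme just described can be sketched in a few lines; the glyph size, toy character memory and function names below are illustrative assumptions, not any particular controller's design.

```python
GLYPH_W, GLYPH_H = 8, 16   # assumed glyph size; both dimensions are powers of two
COLS = 80                  # characters per text row

def pixel_at(scan_line, dot, display_matrix, character_memory):
    """Return the pixel (0 or 1) the character generator outputs at this screen position."""
    # The quotients select the character cell, i.e. the index into the display matrix...
    row, col = scan_line // GLYPH_H, dot // GLYPH_W
    # ...and the remainders select the position inside that character's glyph.
    glyph_row, glyph_col = scan_line % GLYPH_H, dot % GLYPH_W
    code = display_matrix[row * COLS + col]           # character code stored in the cell
    row_bits = character_memory[code][glyph_row]      # one byte = 8 horizontal glyph pixels
    return (row_bits >> (GLYPH_W - 1 - glyph_col)) & 1

# Because 8 = 2**3 and 16 = 2**4, real hardware can skip the divisions entirely:
# the low 3 or 4 counter bits address the glyph, the remaining bits the display matrix.

if __name__ == "__main__":
    character_memory = {0: [0x00] * GLYPH_H, 1: [0xFF] * GLYPH_H}   # toy font: blank and solid
    display_matrix = [(i % 2) for i in range(COLS * 25)]            # alternating cells
    print(pixel_at(0, 0, display_matrix, character_memory))         # 0 (cell 0 holds the blank glyph)
    print(pixel_at(0, 8, display_matrix, character_memory))         # 1 (cell 1 holds the solid glyph)
```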
The character memory resides in a read-only memory in some systems. Other systems allow the use of RAM for this purpose, making it possible to redefine the typeface and even the character set for application-specific purposes. The use of RAM-based characters also facilitates some special techniques, such as the implementation of a pixel-graphics frame buffer by reserving some characters for a bitmap and writing pixels directly to their corresponding character memory. In some historical graphics chips, including the TMS9918, the MOS Technology VIC, and the Game Boy graphics hardware, this was actually the canonical way of doing pixel graphics.
Text modes often assign attributes to the displayed characters. For example, the VT100 terminal allows each character to be underlined, brightened, blinking or inverse. Color-supporting devices usually allow the color of each character, and often the background color as well, to be selected from a limited palette of colors. These attributes can either coexist with the character indices or use a different memory area called color memory or attribute memory.
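As a concrete example, the widely documented IBM PC colour text modes pair each character byte with an attribute byte laid out as a 4-bit foreground colour, a 3-bit background colour and a blink (or bright-background) bit. The sketch below packs and unpacks such a byte; other hardware uses different layouts, so this is only one convention.

```python
def pack_attribute(fg, bg, blink=False):
    """Pack a PC-style text attribute byte: bits 0-3 foreground, 4-6 background, 7 blink."""
    assert 0 <= fg <= 15 and 0 <= bg <= 7
    return (int(blink) << 7) | (bg << 4) | fg

def unpack_attribute(attr):
    return attr & 0x0F, (attr >> 4) & 0x07, bool(attr >> 7)

# A cell in the text buffer is then the pair (character code, attribute byte).
WHITE_ON_BLUE = pack_attribute(fg=0x0F, bg=0x01)
cell = (ord("A"), WHITE_ON_BLUE)
print(hex(cell[1]), unpack_attribute(cell[1]))   # 0x1f (15, 1, False)
```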
Some text mode implementations also have the concept of line attributes. For example, the VT100-compatible line of text terminals supports the doubling of the width and height of the characters on individual text lines.
PC common text modes
Depending on the graphics adapter used, a variety of text modes are available on IBM PC compatible computers. They are listed in the table below:
MDA text could be emphasized with bright, underline, reverse and blinking attributes.
Video cards in general are backward compatible, i.e. EGA supports all MDA and CGA modes, VGA supports MDA, CGA and EGA modes.
By far the most common text mode used in DOS environments, and initial Windows consoles, is the default 80 columns by 25 rows, or 80×25, with 16 colors. This mode was available on practically all IBM and compatible personal computers. Several programs, such as terminal emulators, used only 80×24 for the main display and reserved the bottom row for a status bar.
Two other VGA text modes, 80×43 and 80×50, exist but were very rarely used. The 40-column text modes were never very popular outside games and other applications designed for compatibility with television monitors, and were used only for demonstration purposes or with very old hardware.
Character sizes and graphical resolutions for the extended VESA-compatible Super VGA text modes are manufacturer-dependent. On these display adapters, the available colors can also be halved from 16 to 8 when a second customized character set is employed (giving a total repertoire of 512, instead of the common 256, different graphic characters simultaneously displayed on the screen).
Some cards (e.g. S3) supported custom very large text modes, like 100×37 or even 160×120. In Linux systems, a program called SVGATextMode is often used with SVGA cards to set up very large console text modes, such as for use with split-screen terminal multiplexers.
Modern usage
Many modern programs with a graphical interface simulate the display style of text mode programs, notably when it is important to preserve the vertical alignment of text, e.g., during computer programming. There are also software components that emulate text mode, such as terminal emulators or command line consoles. In Microsoft Windows, the Win32 console usually opens in an emulated, graphical window mode. It can be switched to full-screen, true text mode and vice versa by pressing the Alt and Enter keys together. This is no longer supported by the WDDM display drivers introduced with Windows Vista.
The Linux virtual console operates in text mode. Most Linux distributions support several virtual console screens, accessed by pressing Ctrl, Alt and a function key together.
The AAlib open source library provides programs and routines that specialize in translating standard image and video files, such as PNG and WMV, and displaying them as a collection of ASCII characters. This enables a rudimentary viewing of graphics files on text mode systems, and on text mode web browsers such as Lynx.
See also
Text-based user interface
Teletext
Text semigraphics
ASCII art
Twin
Hardware code page
VGA text mode – VGA-compatible text mode details
References
External links
High-Resolution console on Linux
Further reading
(NB. For example: Signetics 2513 MOS ROM.)
Display technology |
42031684 | https://en.wikipedia.org/wiki/Nokia%20X%20family | Nokia X family | The Nokia X family was a range of budget smartphones that was produced and marketed by Microsoft Mobile, originally introduced in February 2014 by Nokia. The smartphones run on the Nokia X platform, a Linux-based operating system which was a fork of Android. Nokia X is also known generally as the Nokia Normandy. It is regarded as Nokia's first Android device, released during the company's partnership with Microsoft, while Nokia was in the process of selling its mobile phone business to Microsoft, a sale that was completed two months later.
The Nokia X devices heavily resemble the Asha phones, and also contain some Lumia features. They have a single "back" button like the Asha 50x and 230. A "home" button was added to the X2 series when it was released in June 2014. The devices were primarily targeted towards emerging markets and never made their way to Western Europe or North America.
Nokia CEO Stephen Elop called it the Nokia X family during an announcement, possibly to distinguish it from the unrelated Xseries that ran from 2009 to 2011.
In July 2014, Microsoft Mobile announced the end of the X range after just five months (as well as Asha and Series 40) in favor of solely producing and encouraging the use of Windows Phone products.
Background
Despite choosing the Windows Phone operating system for its Lumia series of smartphones, Nokia had experimented with the Android platform in the past. Images of a Nokia N9 running Android 2.3 were leaked in 2011. They were believed to be genuine, as Stephen Elop mentioned Nokia had considered using Android at one time.
2013 events
On 13 September 2013, the New York Times writer Nick Wingfield revealed that Nokia had been testing the Google Android operating system on its Lumia hardware. Another project, known as "Asha on Linux", used a forked version of Android without Google services.
The Asha series previously ran the Java-based Series 40 and Asha platforms. These were not as functional as similarly priced low-end Android handsets, a price range in which Windows Phone devices were not available. Meltemi, a Linux-based operating system designed to replace Series 40, had been scrapped by the company.
The Chinese technology site CTechnology revealed that despite the announced acquisition of Nokia's mobile phone business by Microsoft, development of the Asha on Linux project continued until November 2013, and 10,000 prototype units had been manufactured by Foxconn, containing a Qualcomm Snapdragon 200 8225Q chip.
A report by Tom Warren from The Verge on 11 December 2013 showed an Asha-like device, codenamed "Normandy". He said that "despite the finalisation of the acquisition, development of the device is continuing." As of late January 2014, the deal had not been finalized because of scrutiny from Chinese regulators. AllThingsD suggested that Microsoft may not stop development of the device.
A 14 December 2013 report by CTechnology claimed that the device development had been halted, along with an Android-based Snapdragon 400 tablet. The two projects were to have been created by Nokia's CTO division, which Microsoft did not acquire, with Peter Skillman, the head of UX Design, at the helm of the UI design. The report said that wearable devices were the new focus of the CTO division.
A further leak by @evleaks showed a press image with several colour options for the phone.
According to NokiaPowerUser, the device was dual-SIM, had the model number RM-980, and a display with a 640×360 resolution. In a second report, they suggested the device may be a member of the Asha range, as the development team was headed by Egil Kvaleberg (from Smarterphone), and the UI led by Peter Skillman (who worked on the Asha platform's Swipe UI).
A tweet by @evleaks on 31 December 2013 stated that "The reports of Normandy's death have been greatly exaggerated".
2014 events
A leak on the ITHome technology website showed a blurred image of the phone, and the app drawer of its UI in operation, confirming it to be a dual-SIM device. However, no Nokia logos were found on the device.
@evleaks later posted screenshots of the UI, showing the lock screen and Skype in action.
The device later showed up on the AnTuTu benchmark software as Nokia A110, with KitKat 4.4.1, a 5MP camera and an 854 x 480 display.
Two new photos of the Engineering prototype were leaked in January 2014. One shows a different app launcher than previous photos, suggesting a placeholder.
On 13 January 2014, a press photo showing the tile-like UI of the home screen was leaked, and was accompanied by a screenshot of the Asha platform's Fastlane-style notification centre the next day.
According to Eldar Murtazin, Microsoft was not keen on the idea, saying there were "too many politics" around the project. He claimed it would have to be released in February, before the acquisition of Nokia was completed, if at all. Another source suggested Microsoft would use the device as a trojan horse to increase Windows Phone adoption.
The phone (with the model code RM-980) was certified by the Indonesian authorities on 21 January 2014, suggesting a close launch date.
On 23 January 2014, Nokia sent out invitations to its press event at Mobile World Congress on 24 February 2014, where the device would be unveiled, if it wasn't cancelled.
@evleaks later tweeted that the name of the phone is Nokia X.
A few days later, the specifications were leaked by @evleaks. The device had a dual-core Snapdragon processor, 512 MB of RAM, 4 GB of internal storage, a 1,500 mAh battery, the Nokia Store and third-party app stores, confirming its placement in the low-end market segment.
On 30 January 2014, the French website nowhereelse.fr released more photos of the device, showing its form factor and rear view for the first time.
According to GoAndroid, an anonymous Senior Nokia Executive Officer in India revealed that the device would debut in India in March 2014 under the Asha line.
NokiaPowerUser later revealed that the phone gained certification in Thailand and Malaysia.
The Wall Street Journal's sources confirmed that Nokia was going to reveal the device at the MWC in Barcelona at the end of February 2014.
Reports from Artesyn Technologies and tech.qq said Nokia X is the first of several Android devices from Nokia, including high-end models. These additional devices, one named Nokia XX, would be released during May or June 2014, and were claimed to be out of beta and would possibly receive FCC certification.
Nokia's social media accounts on Twitter and Facebook had their colour changed to green, which was suggested by WPCentral to be a veiled reference to the Android operating system.
@evleaks confirmed that the phone was to be called Nokia X. Rumours of devices being sent to developers in India were published at GSMArena.
As the release date grew nearer, teaser videos on Nokia's YouTube and Vine accounts under the hashtag #GreenDuck were released, as well as teaser images, such as a treasure map with an X marking the spot on the Sina Weibo microblog in China. More images of the interface also surfaced, showing the finalised product for the first time.
On 18 February 2014, the Hungarian technology website tech2.hu claimed the device was under mass production at Nokia's Komárom plant in Hungary.
At a pre-MWC event on 23 February 2014, Microsoft VP for the Windows Phone platform Joe Belfiore was asked about how the company would feel in the event that Nokia released an Android phone. His response was as follows:
Unveiling
First generation
The phone was unveiled by the Nokia CEO at Mobile World Congress on 24 February 2014. Two variants, the Nokia X and the Nokia X+ were released, with the Nokia X+ having 768 MB RAM as opposed to the 512 MB RAM reported by leaks, as well as a microSD card included in the box. The phone also contained the Lumia-inspired UI design, in addition to the Nokia suite of mobile applications as previously leaked.
A third phone, the Nokia XL, was released, with a larger screen, front-facing camera, rear flash and greater battery life. The XL featured 768 MB of RAM and a Qualcomm Snapdragon S4 Play chipset with a dual-core 1.0 GHz Cortex-A5 CPU.
Nokia X2
On 24 June 2014, Microsoft launched the Nokia X2, which featured 1 GB of RAM and a Qualcomm Snapdragon 200 chipset with a dual-core 1.2 GHz Cortex-A7 CPU. It was launched with an official price of €99 (US$135; £80). It also had single and dual-SIM options.
Nokia XL 4G
The Nokia XL 4G was released in China in July 2014. It featured a 1.2 GHz quad-core CPU over the 1.0 GHz dual-core CPU in the original Nokia XL, LTE support of LTE bands, 1 GB of RAM over the original 768 MB and a lower weight.
Aftermath
In an interview with Forbes, former HMD Global CEO Arto Nummela stated that analysis showed that the Nokia X series became surprisingly popular with users of high-end Samsung and Apple smartphones, despite the fact that it was a mid- to low-end device family.
In May 2018, HMD Global launched a new Android phone, the Nokia X6, but it is not officially classified as a new member of the Nokia X series or X family.
Model comparison
The information is sourced from developer.nokia.com and the website of Microsoft Mobile China.
See also
Nokia N1 – Nokia's 2014 Android tablet
Fire OS – the Android Open Source Project derivative by Amazon.com
Nokia N9 – Nokia's previous Linux-based product (MeeGo)
HMD Global – the company behind the current range of Nokia smartphones (2017–present)
Microsoft Surface Duo – Microsoft's later Android device
References
External links
Smartphones
Nokia phones by series
Microsoft hardware |
6000666 | https://en.wikipedia.org/wiki/SugarSync | SugarSync | SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing from a variety of operating systems, such as Android, iOS, Mac OS X, and Windows devices. For Linux, only a discontinued unofficial third-party client is available.
Overview
The SugarSync program automatically refreshes its sync by constantly monitoring changes to files—additions, deletions, edits—and syncs these changes with the SugarSync servers. Any other linked devices then also sync with the SugarSync servers. Deleted files are archived in a "Deleted Files" folder. In the event the local sync folder is stored on a device which later becomes unavailable (secondary hard drive failure, etc.) the SugarSync program will interpret this event as if the user had purposely deleted the entire synchronization folder, resulting in deletion of all files from the user's storage account. Due to this limitation, it is best to only store the local synchronization folder on the boot drive. Files deleted by the user are not actually removed from SugarSync servers until the user does so manually; however, recovery of a larger nested folder structure may be difficult.
Originally offering a free 5 GB plan and several paid plans, the company transitioned to a paid-only model on February 8, 2014. Under the new model, the company offered temporary promotional pricing and encouraged subscribers to use subscription auto-renewal. However, the auto-renewal takes place at the non-promotional rate and no refunds are allowed through the new model. Moreover, if the account cannot be auto-renewed due to expiration of a credit card or some other change in payment terms, the account is automatically cancelled and the data deleted from the company's servers at the same time the account is cancelled. If emailed warnings are not received, SugarSync will delete a company's entire cloud-computing storage account. Furthermore, for the general user, no renewal reminders are sent, and once the account renews, no refunds are provided.
Company history
SugarSync was born out of a company named Sharpcast, which was incorporated in 2004 by Gibu Thomas (CEO) and Ben Strong (Chief technical officer). In 2006, Sharpcast unveiled Sharpcast Photos, a tool for synchronizing images between multiple devices including PCs and mobile phones. Both founders left the company in November 2008. In December 2008, Laura Yecies was appointed as the CEO. Yecies and her team re-focused the company and renamed it SugarSync, Inc. in 2009. The company was headquartered in San Mateo, California.
In March 2013, Mike Grossman took over as CEO. In his self-introductory blog he promised to focus the business on mobile, sharing and collaboration, and enhancing the sync and mirrored capabilities of the product. The blog post was inundated with requests for a Linux client such that the top Google result for searches relating to SugarSync on Linux returned Mr Grossman's introductory message.
SugarSync was acquired by J2 Global in March 2015.
Product history
The company's first product was Sharpcast Photos, software designed to make it easier for people to view their photos on multiple devices and share them via the Internet.
Sharpcast Photos was shut down at the end of 2009. Users were given the option to migrate to the SugarSync service or retrieve their photos.
In June 2013 Samir Mehta posted, on the introductory blog post of SugarSync's new CEO, that SugarSync was "in the process of evaluating a SugarSync Linux app". As of January 2015, no further news had been posted about a Linux client for SugarSync.
In December 2013, SugarSync announced that they would be discontinuing their free 5 GB plan and transitioning to a paid-only service by February 2014.
API and third-party addons
In March 2010, SugarSync unveiled an API. As a result, there are several unofficial SugarSync addons and applications available. These addons come both in the form of web services and browser extensions and desktop applications such as SugarSync Linux desktop client (now discontinued) by Mark Willis.
See also
Comparison of file synchronization software
Comparison of online backup services
References
External links
Data synchronization
Software companies based in California
Companies based in San Mateo, California
Software companies of the United States |
5906977 | https://en.wikipedia.org/wiki/Microtek | Microtek | Microtek International Inc. is a Taiwan-based multinational manufacturer of digital imaging products and other consumer electronics. It produces imaging equipment for the medical, biological and industrial fields. It occupies 20 percent of the global imaging market and holds 450 patents worldwide.
It is known for its scanner brands ScanMaker and ArtixScan. The company launched the world's first halftone optical film scanner in 1984, the world's first desktop halftone scanner in 1986, and the world's first color scanner in 1989. It has subsidiaries in Shanghai, Tokyo, Singapore and Rotterdam. It expanded its product lines into the manufacturing of LCD monitors, LCD projectors and digital cameras.
History
1980-1985: Founding and incorporation
In 1979, the Taiwanese government launched the Hsinchu Science and Industrial Park (HSIP) as a vision of Shu Shien-Siu to emulate Silicon Valley and to lure back overseas Taiwanese with their experience and knowledge in engineering and technology fields. Initially there were 14 companies, the first was Wang Computer (王氏電腦), by 2010 only six of the original pioneers remained: United Microelectronics Corporation (聯電), Microtek International, Inc. (全友), Quartz Frequency Technology (頻率), Tecom (東訊), Sino-American Silicon Products Inc. (中美矽晶) and Flow Asia Corporation (福祿遠東).
Microtek (Microelectronics Technology) was co-founded in HSIP in 1980 by five Californian Taiwanese: three of them, Bobo Wang (王渤渤), Robert Hsieh (謝志鴻) and Carter Tseng (曾憲章), were colleagues who had worked at Xerox, and two, Benny Hsu (許正勳) and Hu Chung-hsing (胡忠信), were colleagues from the University of Southern California. They decided to set up roots after Hsu was invited by HSIP Manager Dr. Irving Ho (何宜慈). In September 1983, the Allied Association for Science Parks Industries (台灣科學園區同業公會, abbr. 竹科) was established and Hsu was elected to be its first Chairman.
Microtek first entered the industry in 1983, when scanners were little more than expensive tools for hobbyists. In 1984, it introduced the MS-300A, a desktop halftone scanner. At about the same time, the company realized a need for scanning software for mainstream users and developed EyeStar, the world’s first scanning software application. EyeStar made desktop scanning a functional reality, serving as the de facto standard for image format for importing graphics before TIFF came to fruition. Microtek proceeded to develop the first OCR, or Optical Character Recognition, program for text scanning, once more successfully integrating a core function of scanning with its machines.
1985: Microtek Lab, Inc.
In 1985, Microtek set up its United States subsidiary, Microtek Lab, Inc., in Cerritos, California. The company went public in 1988. It was one of Taiwan's initial technology initial public offerings. Microtek has research and development labs located in California and Taiwan dedicated to optics design, mechanical and electronic engineering, software development, product quality, and technological advancement. According to AnnaLee Saxenian's 2006 book The New Argonauts: Regional Advantage in a Global Economy, Microtek has produced more than 20% of the worldwide image scanner market.
1989: Ulead Systems
In 1989, Microtek invested in Ulead Systems (based in Taipei), which became the first publicly traded software company in Taiwan in 1999. Ulead Systems was founded by Lotus Chen, Lewis Liaw and Way-Zen Chen, three colleagues from Taiwan's Institute for Information Industry. Microtek helped Ulead by jointly purchasing CCD sensors from Kodak, which benefited both companies as it was a component not yet locally produced at the time.
Products
Herbarium Specimen Digitization
ObjectScan 1600 is an on-top scanner designed for capturing high-resolution images of herbarium specimens. The device is bundled with ScanWizard Graphy, which provides scanner settings and image correction tools. The maximum resolution is 1600 dpi.
ScanWizard Botany is workstation software for specimen image processing, electronic data capture, and uploading metadata to a database or server. The software has an OCR (Optical Character Recognition) function which can automatically detect label information and read barcode information on botanical collections. The information is saved as metadata. It also includes image processing tools such as brightness and contrast adjustment.
MiVAPP Botany is a botanical database management system and web-server system. This system allows botanical gardens, universities, and museums to share their collections online.
Operations
Taiwan
Microtek International Inc.: Headquarters, Science-Based Industrial Park, Hsinchu City
Taipei Office: Da-an District, Taipei City
Mainland China
Shanghai Microtek Technology Co., Ltd: Shanghai
Shanghai Microtek Medical Device Co., Ltd: Shanghai
Shanghai Microtek Trading Co., Ltd: Shanghai
Microtek Computer Technology (Wu Jiang) Co., Ltd: Jiangsu
See also
Ulead Systems
List of companies in Taiwan
References
1980 establishments in Taiwan
Computer peripheral companies
Display technology companies
Electronics companies of Taiwan
Manufacturing companies based in Hsinchu
Computer companies established in 1980
Manufacturing companies established in 1980
Companies listed on the Taiwan Stock Exchange
Taiwanese brands |
13508943 | https://en.wikipedia.org/wiki/LinDVD | LinDVD | LinDVD from Corel was a commercial proprietary software product for Linux for the playback of DVDs and other multimedia files. The latest version supported ultra-mobile PCs (UMPCs) and mobile internet devices (MIDs), as well as streaming media and a wider range of standard and high-definition video and audio encoding standards.
LinDVD can play copy-protected (CSS) DVDs. Certain distributions such as Mandriva included this software in their commercial Linux distributions, and Dell preinstalled it on its Ubuntu systems.
Corel has removed all information about LinDVD from its website; LinDVD is no longer supported.
See also
Comparison of media players
WinDVD
PowerDVD
VLC media player
References
External links
LinDVD on wiki.ubuntuusers.de (German)
InterVideo LinDVD
Software DVD players
Corel software
Linux DVD players |
22730314 | https://en.wikipedia.org/wiki/Comparison%20of%20netbook-oriented%20Linux%20distributions | Comparison of netbook-oriented Linux distributions | Netbooks are small laptops, with screen sizes between approximately 7 and 12 inches and low power consumption. They use either an SSD (solid-state disk) or an HDD (hard disk drive) for storage, have up to 2 gigabytes of RAM (but often less), lack an optical disk drive, and usually have USB, Ethernet, WiFi and often Bluetooth connectivity. The name emphasizes their use as portable Internet appliances.
Netbook distributions
There are special Linux distributions, called netbook distributions, for these machines. All such distributions purport to be optimized for use with small, low-resolution displays. They tend to include a broad mix of VOIP and web-focused tools, including proprietary applications rarely seen installed by default by mainstream desktop distributions. For instance, Nokia Maemo and Asus' customized Xandros both ship with Skype and Adobe Flash installed, and Ubuntu's Netbook Edition offers the option to do the same for OEMs.
Comparison
Features
Specific Features
Google Trends
While no public numbers measuring the install-base of these operating systems are available, Google Trends data on a handful of them indicate their relative popularity:
References
See also
Android
List of Linux distributions that run from RAM
List of tools to create Live USB systems
Netbooks
Linux distributions |
43368666 | https://en.wikipedia.org/wiki/WordRake | WordRake | WordRake is a Seattle-based company that produces editing software of the same name. Gary Kinder, a lawyer, New York Times best-selling author, and legal-writing expert, created the WordRake program in 2012. Kinder has taught over 1,000 writing programs to firms such as Jones Day, WilmerHale, Latham & Watkins, Microsoft, KPMG, and NOAA.
WordRake software is intended to improve the brevity and clarity of writing. It quickly edits business reports, emails, correspondence, briefs, and memoranda to help make them clear and concise. WordRake is used in over 7000 law firms (its initial market), and in businesses, government agencies, and academia. In January 2013, the City of Seattle announced that it had installed WordRake for use in several municipal departments.
Reviews of WordRake have been generally positive while acknowledging the software’s limitations. The program works as an extension to Microsoft Word, with another version for Outlook, and, like automated spelling and grammar checking, WordRake can be prone to false positives.
The second version, WordRake 2, was released in summer 2014. WordRake for Outlook was released in September 2014.
In 2017, Micah Knapp produced and directed a corporate video for WordRake, written and executive-produced by Gary Kinder.
References
Software add-ons
Legal software companies
Legal writing
Editing software |
1144875 | https://en.wikipedia.org/wiki/NetNewsWire | NetNewsWire | NetNewsWire is a free and open-source news aggregator for macOS and iOS. It was introduced by Brent and Sheila Simmons on July 12, 2002, under their company Ranchero Software.
History
NetNewsWire was developed by Brent and Sheila Simmons for their company Ranchero Software. It was introduced on July 12, 2002, with NetNewsWire Lite, a free version missing some advanced features of the (then commercial) version, introduced some weeks later. Version 1.0 was released on February 11, 2003, and version 2.0 was released in May 2005. At that time it included custom feed views, custom downloading and opening of podcasts, synchronization of feeds and feed status between computers, Bloglines support, and a built-in tabbed browser.
In October 2005, NewsGator bought NetNewsWire, bringing their NewsGator Online RSS synchronization service to the Mac. Brent Simmons was hired by NewsGator to continue developing the software.
NetNewsWire 3.0 was released on June 5, 2007. The version added Spotlight indexing of news items, integration with iCal, iPhoto, Address Book, and VoodooPad, Growl support, a new user interface, performance enhancements, and more.
The application was originally shareware, but became free with the release of NetNewsWire 3.1 on January 10, 2008. NetNewsWire Lite was discontinued at the same time. NetNewsWire 3.2 moved to an advertisement-supported model with the option to purchase the application to remove ads.
An iOS version of NetNewsWire with support for the iPhone, iPod Touch and later for the iPad was released on the first day of the App Store. It included syncing of unread articles with the desktop version.
NetNewsWire Lite 4.0 was introduced on March 3, 2011 on the Mac App Store. While it lacks several of the advanced features included in NetNewsWire 3.2, it includes a completely rewritten codebase, which was used in the iOS version of the app and for NetNewsWire 4.0, which was released as shareware.
On June 3, 2011, the acquisition of NetNewsWire by Black Pixel was announced. For two years development apparently stalled, with a gap in updates from 2011 through the release of the version 4 open beta.
On June 24, 2013, NetNewsWire 4.0 was announced and released as an open beta by Black Pixel. This announcement also brought news that the product would be a commercial product with no free component (though the beta would be free to use through the final release).
The final release of NetNewsWire 4.0 occurred on September 3, 2015.
In 2017, support for JSON Feed was added to the codebase.
On August 31, 2018, Black Pixel announced that they had returned the NetNewsWire intellectual property to Brent Simmons.
On September 1, 2018, Brent Simmons released NetNewsWire 5.0d1. It was a renamed version of his open source Mac RSS reader, "Evergreen". Almost a year later, NetNewsWire 5.0 was released on August 26, 2019.
On December 22, 2019, Brent Simmons started a public beta for the NetNewsWire iOS app which was distributed through TestFlight. The iOS version of NetNewsWire 5.0 was released March 9, 2020.
On March 27, 2021, Brent Simmons released NetNewsWire 6.0 for macOS along with a public beta for iOS which, again, was distributed through TestFlight.
On June 22, 2021, Brent Simmons released NetNewsWire 6.0 for iOS.
Reception
NetNewsWire was well regarded by many users and reviewers. According to FeedBurner, NetNewsWire was the most popular desktop newsreader on all platforms in 2005. The software received a Macworld Editor's Choice Award in 2003 and 2005 and maintained a 4.8 out of five stars rating among reviewers at VersionTracker (now CNET). Ars Technica called NetNewsWire's built-in browser "hands-down the best of any Mac newsreader," and Walter Mossberg, technology columnist for The Wall Street Journal, said that NetNewsWire is his favorite for the Mac.
NetNewsWire 5.0 was also received well. MacStories praised the RSS reader's search engine and general stability, but lamented that some advanced features and customization options had not made it into the release, calling 5.0 "a solid foundation for the future". Gizmodo wrote that NetNewsWire 5.0 was off to a promising start, but agreed that it lacked some of the features that might be expected by a power user.
See also
List of feed aggregators
Comparison of feed aggregators
References
External links
Atom (Web standard)
MacOS Internet software
Software based on WebKit
News aggregator software
2002 software |
1657551 | https://en.wikipedia.org/wiki/Automatic%20number-plate%20recognition | Automatic number-plate recognition | Automatic number-plate recognition (ANPR; see also other names below) is a technology that uses optical character recognition on images to read vehicle registration plates to create vehicle location data. It can use existing closed-circuit television, road-rule enforcement cameras, or cameras specifically designed for the task. ANPR is used by police forces around the world for law enforcement purposes, including to check if a vehicle is registered or licensed. It is also used for electronic toll collection on pay-per-use roads and as a method of cataloguing the movements of traffic, for example by highways agencies.
Automatic number-plate recognition can be used to store the images captured by the cameras as well as the text from the license plate, with some configurable to store a photograph of the driver. Systems commonly use infrared lighting to allow the camera to take the picture at any time of day or night. ANPR technology must take into account plate variations from place to place.
Privacy issues have caused concerns about ANPR, such as government tracking citizens' movements, misidentification, high error rates, and increased government spending. Critics have described it as a form of mass surveillance.
Other names
ANPR is sometimes known by various other terms:
Automatic (or automated) license-plate recognition (ALPR)
Automatic (or automated) license-plate reader (ALPR)
Automatic vehicle identification (AVI)
Automatisk nummerpladegenkendelse (ANPG)
Car-plate recognition (CPR)
License-plate recognition (LPR)
Lecture automatique de plaques d'immatriculation (LAPI)
Mobile license-plate reader (MLPR)
Vehicle license-plate recognition (VLPR)
Vehicle recognition identification (VRI)
Development
ANPR was invented in 1976 at the Police Scientific Development Branch in Britain. Prototype systems were working by 1979, and contracts were awarded to produce industrial systems, first at EMI Electronics, and then at Computer Recognition Systems (CRS, now part of Jenoptik) in Wokingham, UK. Early trial systems were deployed on the A1 road and at the Dartford Tunnel. The first arrest through detection of a stolen car was made in 1981. However, ANPR did not become widely used until new developments in cheaper and easier to use software were pioneered during the 1990s. The collection of ANPR data for future use (i.e., in solving then-unidentified crimes) was documented in the early 2000s. The first documented case of ANPR being used to help solve a murder occurred in November 2005, in Bradford, UK, where ANPR played a vital role in locating and subsequently convicting killers of Sharon Beshenivsky.
Components
The software aspect of the system runs on standard home computer hardware and can be linked to other applications or databases. It first uses a series of image manipulation techniques to detect, normalize and enhance the image of the number plate, and then optical character recognition (OCR) to extract the alphanumerics of the license plate. ANPR systems are generally deployed in one of two basic approaches: one allows for the entire process to be performed at the lane location in real-time, and the other transmits all the images from many lanes to a remote computer location and performs the OCR process there at some later point in time. When done at the lane site, the information captured of the plate alphanumeric, date-time, lane identification, and any other information required is completed in approximately 250 milliseconds. This information can easily be transmitted to a remote computer for further processing if necessary, or stored at the lane for later retrieval. In the other arrangement, there are typically large numbers of PCs used in a server farm to handle high workloads, such as those found in the London congestion charge project. Often in such systems, there is a requirement to forward images to the remote server, and this can require larger bandwidth transmission media.
Technology
ANPR uses optical character recognition (OCR) on images taken by cameras. When Dutch vehicle registration plates switched to a different style in 2002, one of the changes made was to the font, introducing small gaps in some letters (such as P and R) to make them more distinct and therefore more legible to such systems. Some license plate arrangements use variations in font sizes and positioning—ANPR systems must be able to cope with such differences in order to be truly effective. More complicated systems can cope with international variants, though many programs are individually tailored to each country.
The cameras used can be existing road-rule enforcement or closed-circuit television cameras, as well as mobile units, which are usually attached to vehicles. Some systems use infrared cameras to take a clearer image of the plates.
In mobile systems
During the 1990s, significant advances in technology took automatic number-plate recognition (ANPR) systems from limited expensive, hard to set up, fixed based applications to simple "point and shoot" mobile ones. This was made possible by the creation of software that ran on cheaper PC based, non-specialist hardware that also no longer needed to be given the pre-defined angles, direction, size and speed in which the plates would be passing the camera's field of view. Further scaled-down components at more cost-effective price points led to a record number of deployments by law enforcement agencies around the world. Smaller cameras with the ability to read license plates at higher speeds, along with smaller, more durable processors that fit in the trunks of police vehicles, allowed law enforcement officers to patrol daily with the benefit of license plate reading in real time, when they can interdict immediately.
Despite their effectiveness, there are noteworthy challenges related to mobile ANPR systems. One of the biggest is that the processor and the cameras must work fast enough to accommodate relative speeds of more than 100 mph (160 km/h), a likely scenario in the case of oncoming traffic. This equipment must also be very efficient, since the power source is the vehicle battery, and equipment must be small to minimize the space it requires.
Relative speed is only one issue that affects the camera's ability to actually read a license plate. Algorithms must be able to compensate for all the variables that can affect the ANPR's ability to produce an accurate read, such as time of day, weather and angles between the cameras and the license plates. A system's illumination wavelengths can also have a direct impact on the resolution and accuracy of a read in these conditions.
Installing ANPR cameras on law enforcement vehicles requires careful consideration of the juxtaposition of the cameras to the license plates they are to read. Using the right number of cameras and positioning them accurately for optimal results can prove challenging, given the various missions and environments at hand. Highway patrol requires forward-looking cameras that span multiple lanes and are able to read license plates at very high speeds. City patrol needs shorter-range, lower focal length cameras for capturing plates on parked cars. Parking lots with perpendicularly parked cars often require a specialized camera with a very short focal length. Most technically advanced systems are flexible and can be configured with a number of cameras ranging from one to four, which can easily be repositioned as needed. States with rear-only license plates have an additional challenge since a forward-looking camera is ineffective with oncoming traffic. In this case one camera may be turned backwards.
Algorithms
There are seven primary algorithms that the software requires for identifying a license plate:
Plate localization – responsible for finding and isolating the plate on the picture
Plate orientation and sizing – compensates for the skew of the plate and adjusts the dimensions to the required size
Normalization – adjusts the brightness and contrast of the image
Character segmentation – finds the individual characters on the plates
Optical character recognition
Syntactical/geometrical analysis – checks characters and positions against country-specific rules
The averaging of the recognised value over multiple fields/images to produce a more reliable or confident result, especially given that any single image may contain a reflected light flare, be partially obscured, or possess other obfuscating effects.
The complexity of each of these subsections of the program determines the accuracy of the system. During the third phase (normalization), some systems use edge detection techniques to increase the picture difference between the letters and the plate backing. A median filter may also be used to reduce the visual noise on the image.
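A rough sketch of this pipeline is shown below, using OpenCV and Tesseract purely as illustrative tools; the article does not prescribe any particular library, and the aspect-ratio limits, area threshold and OCR settings are made-up tuning values. It covers grayscale conversion and a median filter (normalization and noise reduction), edge detection and contour search (plate localization), Otsu thresholding (to help segmentation), and OCR restricted to plate-style characters.

```python
import cv2                  # pip install opencv-python
import pytesseract          # pip install pytesseract (requires the tesseract binary)

def read_plate(image_path):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # normalization step
    gray = cv2.medianBlur(gray, 3)                         # median filter reduces visual noise
    edges = cv2.Canny(gray, 100, 200)                      # edge detection

    # Plate localization: look for a roughly rectangular, plate-shaped contour.
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV >= 4
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h)
        if 2.0 < aspect < 6.0 and cv2.contourArea(c) > 1000:    # heuristic, illustrative limits
            candidates.append((cv2.contourArea(c), (x, y, w, h)))
    if not candidates:
        return None
    _, (x, y, w, h) = max(candidates)                      # take the largest candidate region

    plate = gray[y:y + h, x:x + w]                         # crop (no deskew in this sketch)
    _, plate = cv2.threshold(plate, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # binarize to aid segmentation

    # OCR: treat the crop as a single text line and restrict to plate-style characters.
    config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    return pytesseract.image_to_string(plate, config=config).strip()

if __name__ == "__main__":
    print(read_plate("car.jpg"))   # hypothetical input image
```

A production system would add deskewing, per-character segmentation, syntax checks against country-specific plate formats, and averaging over multiple frames, as listed above.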
Difficulties
There are a number of possible difficulties that the software must be able to cope with. These include:
Poor file resolution, usually because the plate is too far away but sometimes resulting from the use of a low-quality camera
Blurry images, particularly motion blur
Poor lighting and low contrast due to overexposure, reflection or shadows
An object obscuring (part of) the plate, quite often a tow bar, or dirt on the plate
License plates that are different at the front and the back because of towed trailers, campers, etc.
Vehicle lane change in the camera's angle of view during license plate reading
A different font, popular for vanity plates (some countries do not allow such plates, eliminating the problem)
Circumvention techniques
Lack of coordination between countries or states: two cars from different countries or states can have the same number but a different plate design.
While some of these problems can be corrected within the software, it is primarily left to the hardware side of the system to work out solutions to these difficulties. Increasing the height of the camera may avoid problems with objects (such as other vehicles) obscuring the plate but introduces and increases other problems, such as adjusting for the increased skew of the plate.
On some cars, tow bars may obscure one or two characters of the license plate. Bikes on bike racks can also obscure the number plate, though in some countries and jurisdictions, such as Victoria, Australia, "bike plates" are supposed to be fitted. Some small-scale systems allow for some errors in the license plate. When used for giving specific vehicles access to a barricaded area, the decision may be made to have an acceptable error rate of one character. This is because the likelihood of an unauthorized car having such a similar license plate is seen as quite small. However, this level of inaccuracy would not be acceptable in most applications of an ANPR system.
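The one-character error tolerance described above for barrier access amounts to a simple fuzzy match against a list of authorized plates, as in this sketch (the plate numbers are hypothetical):

```python
def differs_by_at_most_one(read_plate, known_plate):
    """True if the plates have equal length and differ in at most one character."""
    if len(read_plate) != len(known_plate):
        return False
    return sum(a != b for a, b in zip(read_plate, known_plate)) <= 1

def gate_allows(read_plate, authorized):
    return any(differs_by_at_most_one(read_plate, p) for p in authorized)

authorized = {"AB12CDE", "XY98ZZZ"}          # hypothetical authorized plate list
print(gate_allows("AB12CDF", authorized))    # True  - one misread character is tolerated
print(gate_allows("QQ12CDE", authorized))    # False - two characters differ
```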
Imaging hardware
At the front end of any ANPR system is the imaging hardware which captures the image of the license plates. The initial image capture forms a critically important part of the ANPR system which, in accordance with the garbage in, garbage out principle of computing, will often determine the overall performance.
License plate capture is typically performed by specialized cameras designed specifically for the task, although new software techniques are being implemented that support any IP-based surveillance camera and increase the utility of ANPR for perimeter security applications. Factors which pose difficulty for license plate imaging cameras include the speed of the vehicles being recorded, varying level of ambient light, headlight glare and harsh environmental conditions. Most dedicated license plate capture cameras will incorporate infrared illumination in order to solve the problems of lighting and plate reflectivity.
Many countries now use license plates that are retroreflective. This returns the light back to the source and thus improves the contrast of the image. In some countries, the characters on the plate are not reflective, giving a high level of contrast with the reflective background in any lighting conditions. A camera that makes use of active infrared imaging (with a normal colour filter over the lens and an infrared illuminator next to it) benefits greatly from this as the infrared waves are reflected back from the plate. This is only possible on dedicated ANPR cameras, however, and so cameras used for other purposes must rely more heavily on the software capabilities. Further, when a full-colour image is required as well as use of the ANPR-retrieved details, it is necessary to have one infrared-enabled camera and one normal (colour) camera working together.
To avoid blurring it is ideal to have the shutter speed of a dedicated camera set to 1/1000 of a second. It is also important that the camera use a global shutter, as opposed to rolling shutter, to assure that the taken images are distortion-free. Because the car is moving, slower shutter speeds could result in an image which is too blurred to read using the OCR software, especially if the camera is much higher up than the vehicle. In slow-moving traffic, or when the camera is at a lower level and the vehicle is at an angle approaching the camera, the shutter speed does not need to be so fast. Shutter speeds of 1/500 of a second can cope with traffic moving up to 40 mph (64 km/h) and 1/250 of a second up to 5 mph (8 km/h). License plate capture cameras can produce usable images from vehicles traveling at .
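To put these shutter speeds into perspective, the motion blur in the image is roughly the vehicle speed multiplied by the exposure time. The short calculation below (a sketch in Python, using the speeds quoted in this paragraph) shows that 1/1000 of a second at 64 km/h smears the plate by under 2 cm, while 1/500 of a second at the same speed gives roughly 3.6 cm.

    # Motion blur distance = vehicle speed x exposure time.
    def blur_cm(speed_kmh: float, shutter_s: float) -> float:
        speed_ms = speed_kmh / 3.6            # km/h -> m/s
        return speed_ms * shutter_s * 100.0   # metres -> centimetres

    for speed_kmh, shutter in [(64, 1 / 1000), (64, 1 / 500), (8, 1 / 250)]:
        print(f"{speed_kmh} km/h at 1/{round(1 / shutter)} s -> "
              f"{blur_cm(speed_kmh, shutter):.1f} cm of blur")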
To maximize the chances of effective license plate capture, installers should carefully consider the positioning of the camera relative to the target capture area. Exceeding threshold angles of incidence between camera lens and license plate will greatly reduce the probability of obtaining usable images due to distortion. Manufacturers have developed tools to help eliminate errors from the physical installation of license plate capture cameras.
Usage
Law enforcement
Australia
Several State Police Forces, and the Department of Justice (Victoria), use both fixed and mobile ANPR systems. The New South Wales Police Force Highway Patrol were the first to trial and use a fixed ANPR camera system in Australia in 2005. In 2009 they began a roll-out of a mobile ANPR system (known officially as MANPR) with three infrared cameras fitted to its Highway Patrol fleet. The system identifies unregistered and stolen vehicles, disqualified or suspended drivers, and other 'persons of interest' such as persons with outstanding warrants.
Belgium
The city of Mechelen has used an ANPR system since September 2011 to scan all cars crossing the city limits (inbound and outbound). Cars listed on 'black lists' (no insurance, stolen, etc.) trigger an alarm in the dispatching room, so they can be intercepted by a patrol.
As of early 2012, 1 million cars per week are automatically checked in this way.
Canada
Federal, provincial, and municipal police services across Canada use automatic licence plate recognition software; it is also used on certain toll routes and by parking enforcement agencies. Laws governing the use of such devices and of the information thus obtained are mandated through various provincial privacy acts.
Denmark
The technique was tested by the Danish police and has been in permanent use since mid-2016.
France
180 gantries over major roads have been built throughout the country. These, together with a further 250 fixed cameras, are intended to enable the levy of an eco tax on lorries over 3.5 tonnes. The system is currently being opposed and, while data may be collected on vehicles passing the cameras, no eco tax is being charged.
Germany
On 11 March 2008, the Federal Constitutional Court of Germany ruled that some areas of the laws permitting the use of automated number plate recognition systems in Germany violated the right to privacy. More specifically, the court found that the retention of any sort of information (i.e., number plate data) which was not for any pre-destined use (e.g., for use tracking suspected terrorists or for enforcement of speeding laws) was in violation of German law.
These systems were provided by Jenoptik Robot GmbH, and called TraffiCapture.
Hungary
In 2012 a state consortium was formed among the Hungarian Ministry of Interior, the National Police Headquarters and the Central Commission of Public Administration and Electronic Services with the aim to install and operate a unified intelligent transportation system (ITS) with nationwide coverage by the end of 2015. Within the system, 160 portable traffic enforcement and data-gathering units and 365 permanent gantry installations were brought online with ANPR, speed detection, imaging and statistical capabilities. Since all the data points are connected to a centrally located ITS, each member of the consortium is able to separately utilize its range of administrative and enforcement activities, such as remote vehicle registration and insurance verification, speed, lane and traffic light enforcement and wanted or stolen vehicle interception among others.
Several Hungarian auxiliary police units also use a system called Matrix Police in cooperation with the police. It consists of a portable computer equipped with a web camera that scans the stolen car database using automatic number-plate recognition. The system is installed on the dashboard of selected patrol vehicles (PDA-based hand-held versions also exist) and is mainly used to check the license plates of parked cars. As the Auxiliary Police do not have the authority to order moving vehicles to stop, the regular police are informed if a stolen car is found.
Saudi Arabia
Vehicle registration plates in Saudi Arabia use a white background, but several vehicle types may have a different background. Only 17 Arabic letters are used on the registration plates. A challenge for plate recognition in Saudi Arabia is the size of the digits: some plates use both Eastern Arabic numerals and the 'Western Arabic' equivalents. Research on ANPR for Arabic digits, with source code, is available.
Sweden
The technique is tested by the Swedish Police Authority at nine different locations in Sweden.
Turkey
Several cities have tested—and some have put into service—the KGYS (Kent Guvenlik Yonetim Sistemi, City Security Administration System). The capital, Ankara, for example, debuted KGYS, which consists of a registration plate number recognition system on the main arteries and city exits. The system was initially used with two cameras per lane, one for plate recognition and one for speed detection. It has since been widened to network all the registration number cameras together and to enforce average speed over preset distances. Speed limits vary between arteries, and photo evidence with date-time details is posted to the registration address if a speed violation is detected. As of 2012, the fine for exceeding the speed limit by more than 30% is approximately US$175.
Ukraine
The system integrator «OLLI Technology» and the Ministry of Internal Affairs of Ukraine's Department of State Traffic Inspection (STI) are experimenting with the introduction of a modern technical system capable of locating stolen cars, drivers with revoked licenses and other vehicles of interest in real time. The Ukrainian "Video Control" system works by capturing video of the car, recognising the license plate and checking it against a database.
United Kingdom
The Home Office states the purpose of automatic number-plate recognition in the United Kingdom is to help detect, deter and disrupt criminality including tackling organised crime groups and terrorists. Vehicle movements are recorded by a network of nearly 8000 cameras capturing between 25 and 30 million ANPR ‘read’ records daily. These records are stored for up to two years in the National ANPR Data Centre, which can be accessed, analysed and used as evidence as part of investigations by UK law enforcement agencies.
In 2012, the UK Parliament enacted the Protection of Freedoms Act which includes several provisions related to controlling and restricting the collection, storage, retention, and use of information about individuals. Under this Act, the Home Office published a code of practice in 2013 for the use of surveillance cameras, including ANPR, by government and law enforcement agencies. The aim of the code is to help ensure their use is "characterised as surveillance by consent, and such consent on the part of the community must be informed consent and not assumed by a system operator. Surveillance by consent should be regarded as analogous to policing by consent." In addition, a set of standards were introduced in 2014 for data, infrastructure, and data access and management.
United States
In the United States, ANPR systems are more commonly referred to as ALPR (Automatic License Plate Reader/Recognition) technology, due to differences in language (i.e., "number plates" are referred to as "license plates" in American English).
Mobile ANPR use is widespread among US law enforcement agencies at the city, county, state and federal level. According to a 2012 report by the Police Executive Research Forum, approximately 71% of all US police departments use some form of ANPR. Mobile ANPR is becoming a significant component of municipal predictive policing strategies and intelligence gathering, as well as for recovery of stolen vehicles, identification of wanted felons, and revenue collection from individuals who are delinquent on city or state taxes or fines, or monitoring for Amber Alerts. With the widespread implementation of this technology, many U.S. states now issue misdemeanor citations of up to $500 when a license plate is identified as expired or on the incorrect vehicle. Successfully recognized plates may be matched against databases including "wanted person", "protection order", missing person, gang member, known and suspected terrorist, supervised release, immigration violator, and National Sex Offender lists. In addition to the real-time processing of license plate numbers, ANPR systems in the US collect (and can indefinitely store) data from each license plate capture. Images, dates, times and GPS coordinates can be stockpiled and can help place a suspect at a scene, aid in witness identification, pattern recognition or the tracking of individuals.
The Department of Homeland Security has proposed a federal database to combine all monitoring systems, which was cancelled after privacy complaints. In 1998, a Washington, D.C. police lieutenant pleaded guilty to extortion after blackmailing the owners of vehicles parked near a gay bar. In 2015, the Los Angeles Police Department proposed sending letters to the home addresses of all vehicles that enter areas of high prostitution.
Early private sector mobile ANPR applications have been for vehicle repossession and recovery, although the application of ANPR by private companies to collect information from privately owned vehicles or from private property (for example, driveways) has become an issue of sensitivity and public debate. Other ANPR uses include parking enforcement and revenue collection from individuals who are delinquent on city or state taxes or fines. The technology is often featured in the reality TV show Parking Wars, aired on A&E Network. In the show, tow truck drivers and booting teams use ANPR to find delinquent vehicles with high amounts of unpaid parking fines.
Laws
Laws vary among the states regarding collection and retention of license plate information. , 16 states have limits on how long the data may be retained, with the lowest being New Hampshire (3 minutes) and highest Colorado (3 years). The Supreme Court of Virginia ruled in 2018 that data collected from ALPRs can constitute personal information. As a result, on 1 April 2019, a Fairfax County judge issued an injunction prohibiting the Fairfax County Police Department from collecting and storing ALPR data outside of an investigation or intelligence gathering related to a criminal investigation. On October 22, 2020, the Supreme Court of Virginia overturned that decision, ruling that the data collected was not personal, identifying information.
In April 2020, the Massachusetts Supreme Judicial Court found that the warrantless use of automated license plate readers to surveil a suspected heroin distributor's bridge crossings to Cape Cod did not violate the Fourth Amendment to the United States Constitution only because of the limited time and scope of the observations.
Average-speed cameras
ANPR is used for speed limit enforcement in Australia, Austria, Belgium, Dubai (UAE), France, Italy, The Netherlands, Spain, South Africa, the UK, and Kuwait.
This works by tracking vehicles' travel time between two fixed points, and calculating the average speed. These cameras are claimed to have an advantage over traditional speed cameras in maintaining steady legal speeds over extended distances, rather than encouraging heavy braking on approach to specific camera locations and subsequent acceleration back to illegal speeds.
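In outline, the computation is the known distance between the two camera sites divided by the time between the two reads of the same plate. The sketch below (in Python; the 10 km spacing, the timestamps and the 100 km/h limit are made-up illustrative values) shows the idea.

    from datetime import datetime

    DISTANCE_KM = 10.0  # known spacing between the two camera sites (illustrative)

    def average_speed_kmh(first_read: datetime, second_read: datetime) -> float:
        hours = (second_read - first_read).total_seconds() / 3600.0
        return DISTANCE_KM / hours

    entry_read = datetime(2024, 1, 1, 12, 0, 0)
    exit_read = datetime(2024, 1, 1, 12, 5, 0)   # same plate read 5 minutes later

    speed = average_speed_kmh(entry_read, exit_read)
    print(f"Average speed: {speed:.0f} km/h")     # 10 km in 5 minutes -> 120 km/h
    if speed > 100:                               # illustrative speed limit
        print("Refer vehicle for enforcement")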
Italy
In Italy, the motorway operator Autostrade per l'Italia has developed a monitoring system named Tutor, covering more than 2,500 km as of 2012. The Tutor system is also able to intercept cars while changing lanes. The Tutor or Safety Tutor is a joint project between the motorway management company, Autostrade per l'Italia, and the State Police. Over time it has been replaced by other versions, for example the SICVe-PM, where PM stands for PlateMatching, and the SICVe Vergilius. In addition to this average speed monitoring system, there are others, such as Celeritas and T-Expeed v.2.
Netherlands
Average speed cameras (trajectcontrole) have been in place in the Netherlands since 2002. As of July 2009, 12 cameras were operational, mostly in the west of the country and along the A12. Some of these are divided into several "sections" to allow for cars leaving and entering the motorway.
A first experimental system was tested on a short stretch of the A2 in 1997 and was deemed a big success by the police, reducing overspeeding to 0.66%, compared to 5 to 6% when regular speed cameras were used at the same location. The first permanent average speed cameras were installed on the A13 in 2002, shortly after the speed limit was reduced to 80 km/h to limit noise and air pollution in the area. In 2007, average speed cameras resulted in 1.7 million fines for overspeeding out of a total of 9.7 million. According to the Dutch Attorney General, the average rate of speed-limit violations on motorway sections equipped with average speed cameras is between 1 and 2%, compared to 10 to 15% elsewhere.
United Kingdom
One of the most notable stretches of average speed cameras in the UK is found on the A77 road in Scotland, with the stretch between Kilmarnock and Girvan being monitored. In 2006 it was confirmed that speeding tickets could potentially be avoided from the 'SPECS' cameras by changing lanes, and the RAC Foundation feared that people might play "Russian roulette", changing from one lane to another to lessen their odds of being caught; however, in 2007 the system was upgraded for multi-lane use and in 2008 the manufacturer described the "myth" as "categorically untrue". There is evidence that implementation of systems such as SPECS has a considerable effect on the volume of drivers travelling at excessive speeds; on the stretch of road mentioned above (the A77 between Glasgow and Ayr) a "huge drop" in speeding violations has been noted since the introduction of the SPECS system.
Crime deterrent
Recent innovations have contributed to the adoption of ANPR for perimeter security and access control applications at government facilities. Within the US, "homeland security" efforts to protect against alleged "acts of terrorism" have resulted in adoption of ANPR for sensitive facilities such as embassies, schools, airports, maritime ports, military and federal buildings, law enforcement and government facilities, and transportation centers. ANPR is marketed as able to be implemented through networks of IP based surveillance cameras that perform "double duty" alongside facial recognition, object tracking, and recording systems for the purpose of monitoring suspicious or anomalous behavior, improving access control, and matching against watch lists. ANPR systems are most commonly installed at points of significant sensitivity, ingress or egress. Major US agencies such as the Department of Homeland Security, the Department of Justice, the Department of Transportation and the Department of Defense have purchased ANPR for perimeter security applications. Large networks of ANPR systems are being installed by cities such as Boston, London and New York City to provide citywide protection against acts of terrorism, and to provide support for public gatherings and public spaces.
The Center for Evidence-Based Crime Policy at George Mason University identifies several randomized controlled trials of automatic number-plate recognition technology as very rigorous.
Enterprise security and services
In addition to government facilities, many private sector industries with facility security concerns are beginning to implement ANPR solutions. Examples include casinos, hospitals, museums, parking facilities, and resorts. In the US, private facilities typically cannot access government or police watch lists, but may develop and match against their own databases for customers, VIPs, critical personnel or "banned person" lists. In addition to providing perimeter security, private ANPR has service applications for valet / recognized customer and VIP recognition, logistics and key personnel tracking, sales and advertising, parking management, and logistics (vendor and support vehicle tracking).
Traffic control
Many cities and districts have developed traffic control systems to help monitor the movement and flow of vehicles around the road network. This has typically involved looking at historical data, estimates, observations and statistics, such as:
Car park usage
Pedestrian crossing usage
Number of vehicles along a road
Areas of low and high congestion
Frequency, location and cause of road works
CCTV cameras can be used to help traffic control centres by giving them live data, allowing for traffic management decisions to be made in real-time. By using ANPR on this footage it is possible to monitor the travel of individual vehicles, automatically providing information about the speed and flow of various routes. These details can highlight problem areas as and when they occur and help the centre to make informed incident management decisions.
Some counties of the United Kingdom have worked with Siemens Traffic to develop traffic monitoring systems for their own control centres and for the public. Projects such as Hampshire County Council's ROMANSE provide an interactive and real-time website showing details about traffic in the city. The site shows information about car parks, ongoing road works, special events and footage taken from CCTV cameras. ANPR systems can be used to provide average point-to-point journey times along particular routes, which can be displayed on a variable-message sign (VMS), giving drivers the ability to plan their route. ROMANSE also allows travellers to see the current situation using a mobile device with an Internet connection (such as WAP, GPRS or 3G), allowing them to view CCTV images from the Hampshire road network while on the move.
The UK company Trafficmaster has used ANPR since 1998 to estimate average traffic speeds on non-motorway roads without the results being skewed by local fluctuations caused by traffic lights and similar. The company now operates a network of over 4000 ANPR cameras, but claims that only the four most central digits are identified, and no numberplate data is retained.
The IEEE Intelligent Transportation Systems Society has published papers on number plate recognition technologies and applications.
Electronic toll collection
Toll roads
Ontario's 407 ETR highway uses a combination of ANPR and radio transponders to toll vehicles entering and exiting the road. Radio antennas are located at each junction and detect the transponders, logging the unique identity of each vehicle in much the same way as the ANPR system does. Without ANPR as a second system it would not be possible to monitor all the traffic. Drivers who opt to rent a transponder for C$2.55 per month are not charged the "Video Toll Charge" of C$3.60 for using the road, with heavy vehicles (those with a gross weight of over 5,000 kg) being required to use one. Using either system, users of the highway are notified of the usage charges by post.
There are numerous other electronic toll collection networks which use this combination of Radio frequency identification and ANPR. These include:
The Golden Gate Bridge in San Francisco, California, which began using an all-electronic tolling system combining Fastrak and ANPR on March 27, 2013
NC Quick Pass for the Interstate 540 (North Carolina) Triangle Expressway in Wake County, North Carolina
Bridge Pass for the Saint John Harbour Bridge in Saint John, New Brunswick
Quickpass at the Golden Ears Bridge, crossing the Fraser River between Langley and Maple Ridge
e-TAG, Australia
FasTrak in California, United States
Highway 6 in Israel
Tunnels in Hong Kong
Autopista Central in Santiago, Chile (site in Spanish)
E-ZPass in New York, New Jersey, Pennsylvania, Massachusetts (as Fast Lane until 2012), Virginia (formerly Smart Tag), and other states. Maryland Route 200 uses a combination of E-ZPass and ANPR.
TollTag in North Texas and EZ-Tag in Houston, Texas
I-Pass in Illinois
Pikepass in Oklahoma
Peach Pass I-85 Atlanta, Georgia (Gwinnett County)
OGS (Otomatik Geçiş Sistemi) used at Bosphorus Bridge, Fatih Sultan Mehmet Bridge, and Trans European Motorway entry points in İstanbul, Turkey
M50 Westlink Toll in Dublin, Ireland
Hi-pass in South Korea
Northern Gateway, SH 1, Auckland, New Zealand
Evergreen Point Floating Bridge, Seattle, and Washington State Route 167 HOT-lanes in western Washington
ETC in Taiwan
SunPass In Florida
Portugal
Portuguese roads include older highways with toll stations where drivers can pay with cards, as well as lanes with electronic collection systems. However, most new highways offer only electronic toll collection.
The electronic toll collection system comprises three different structures:
ANPR which works with infrared cameras and reads license plates from every vehicle
Lasers for volumetric measurement of the vehicle to confirm whether it is a regular car or an SUV or truck, as charges differ according to the type of vehicle
An RFID-like system to read on-board smart tags.
When a smart tag is installed in the vehicle, the car is quickly identified and the owner's bank account is automatically debited. This works at any speed, up to over 250 km per hour.
If the car does not have a smart tag, the driver is required to pay the toll at a pay station between the 3rd and 5th day afterwards, with a surcharge. If they fail to do so, the owner is sent a letter with a heavy fine. If this is not paid, the fine increases five-fold and, after that, the car is entered into a police database for vehicle impounding.
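The escalation described above can be restated as a simple decision flow. The sketch below (in Python) is only a paraphrase of the steps in this paragraph, not the operator's actual system.

    def toll_outcome(has_smart_tag: bool, paid_within_window: bool, fine_paid: bool) -> str:
        """Paraphrases the escalation steps for a single toll passage."""
        if has_smart_tag:
            return "owner's account debited automatically at the gantry"
        if paid_within_window:
            return "toll paid at a pay station between day 3 and day 5, with a surcharge"
        if fine_paid:
            return "heavy fine paid after a letter is sent to the owner"
        return "fine increased five-fold; vehicle entered into a police database for impounding"

    print(toll_outcome(True, False, False))
    print(toll_outcome(False, True, False))
    print(toll_outcome(False, False, False))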
This system is also used in some limited access areas of main cities to allow only entry from pre-registered residents. It is planned to be implemented both in more roads and in city entrance toll collection/access restriction. The efficacy of the system is considered to be so high that it is almost impossible for the driver to complain.
London congestion charge
The London congestion charge is an example of a system that charges motorists entering a payment area. Transport for London (TfL) uses ANPR systems and charges motorists a daily fee of £11.50 if they enter, leave or move around within the congestion charge zone between 7 a.m. and 6:00 p.m., Monday to Friday. A reduced fee of £10.50 is paid by vehicle owners who sign up for the automatic deduction scheme. Fines for traveling within the zone without paying the charge are £65 per infraction if paid before the deadline, doubling to £130 per infraction thereafter.
There are currently 1,500 cameras which use automatic number plate recognition (ANPR) technology. There are also a number of mobile camera units which may be deployed anywhere in the zone.
It is estimated that around 98% of vehicles moving within the zone are caught on camera. The video streams are transmitted to a data centre located in central London where the ANPR software deduces the registration plate of the vehicle. A second data centre provides a backup location for image data.
Both front and back number plates are being captured, on vehicles going both in and out – this gives up to four chances to capture the number plates of a vehicle entering and exiting the zone. This list is then compared with a list of cars whose owners/operators have paid to enter the zone – those that have not paid are fined. The registered owner of such a vehicle is looked up in a database provided by the DVLA.
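In outline, the charging step is a comparison between the set of plates read inside the zone and the set of plates for which the daily charge has been paid or an exemption registered. The sketch below (in Python, with made-up plate numbers) illustrates only that comparison and is not TfL's actual system.

    # Plates read by the cameras during charging hours (illustrative values).
    captured = {"AB12CDE", "XY34FGH", "LM56NOP"}

    # Plates for which the daily charge has been paid or an exemption applies.
    paid_or_exempt = {"AB12CDE", "LM56NOP"}

    # Vehicles to fine: captured but not paid; keepers are looked up via the DVLA.
    for plate in sorted(captured - paid_or_exempt):
        print(f"Issue a penalty charge notice to the registered keeper of {plate}")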
South Africa
In Johannesburg, South Africa, ANPR is used for e-toll fee collection. Owners of cars driving into or out of the inner city must pay a charge. The number of toll gantries passed depends on the distance travelled on the particular freeway. Freeways with ANPR include the N1, N3 and N12.
Sweden
In Stockholm, Sweden, ANPR is used for the Stockholm congestion tax: owners of cars driving into or out of the inner city must pay a charge that depends on the time of day. Since 2013 it has also been used for the Gothenburg congestion tax, which also covers vehicles passing the city on the main highways.
Private use
Several UK companies and agencies use ANPR systems. These include Vehicle and Operator Services Agency (VOSA), Driver and Vehicle Licensing Agency (DVLA) and Transport for London.
Other uses
ANPR systems may also be used for/by:
Section control, to measure average vehicle speed over longer distances
Border crossings
Automobile repossessions
Petrol stations to log when a motorist drives away without paying for their fuel
A marketing tool to log patterns of use
Targeted advertising, à la "Minority Report"-style billboards
Traffic management systems, which determine traffic flow using the time it takes vehicles to pass two ANPR sites
Analyses of travel behaviour (route choice, origin-destination etc.) for transport planning purposes
Drive-through customer recognition, to automatically recognize customers based on their license plate and offer them the items they ordered the last time they used the service
To assist visitor management systems in recognizing guest vehicles
Police and auxiliary police
Car parking companies
To raise or lower automatic bollards
Hotels
Enforcing Move over laws for emergency vehicles
Automated emissions testing
Challenges
Circumvention
Vehicle owners have used a variety of techniques in an attempt to evade ANPR systems and road-rule enforcement cameras in general. One method increases the reflective properties of the lettering and makes it more likely that the system will be unable to locate the plate or produce a high enough level of contrast to be able to read it. This is typically done by using a plate cover or a spray, though claims regarding the effectiveness of the latter are disputed. In most jurisdictions, the covers are illegal and covered under existing laws, while in most countries there is no law to disallow the use of the sprays. Other users have attempted to smear their license plate with dirt or utilize covers to mask the plate.
Novelty frames around Texas license plates were made illegal in Texas on 1 September 2003 by Texas Senate Bill 439 because they caused problems with ANPR devices. That law made it a Class C misdemeanor (punishable by a fine of up to US$200), or Class B (punishable by a fine of up to US$2,000 and 180 days in jail) if it can be proven that the owner did it to deliberately obscure their plates. The law was later clarified in 2007 to allow novelty frames.
If an ANPR system cannot read the plate, it can flag the image for attention, with the human operators looking to see if they are able to identify the alphanumerics.
In 2013 researchers at Sunflex Zone Ltd created a privacy license plate frame that uses near infrared light to make the license plate unreadable to license plate recognition systems.
Controversy
The introduction of ANPR systems has led to fears of misidentification and the furthering of 1984-style surveillance. In the United States, some such as Gregg Easterbrook oppose what they call "machines that issue speeding tickets and red-light tickets" as the beginning of a slippery slope towards an automated justice system:
"A machine classifies a person as an offender, and you can't confront your accuser because there is no accuser... can it be wise to establish a principle that when a machine says you did something illegal, you are presumed guilty?"
Similar criticisms have been raised in other countries. Easterbrook also argues that this technology is employed to maximize revenue for the state, rather than to promote safety.
The electronic surveillance system produces tickets which in the US are often in excess of $100, and are virtually impossible for a citizen to contest in court without the help of an attorney. The revenues generated by these machines are shared generously with the private corporation that builds and operates them, creating a strong incentive to tweak the system to generate as many tickets as possible.
Older systems were notably unreliable; in the UK this has been known to lead to charges being made incorrectly, with the vehicle owner having to pay £10 in order to be issued with proof (or otherwise) of the offense. Improvements in technology have drastically decreased error rates, but false accusations are still frequent enough to be a problem.
Perhaps the best known incident involving the abuse of an ANPR database in North America is the case of Edmonton Sun reporter Kerry Diotte in 2004. Diotte wrote an article critical of Edmonton police use of traffic cameras for revenue enhancement, and in retaliation was added to an ANPR database of "high-risk drivers" in an attempt to monitor his habits and create an opportunity to arrest him. The police chief and several officers were fired as a result, and The Office of the Privacy Commissioner of Canada expressed public concern over the "growing police use of technology to spy on motorists."
Other concerns include the storage of information that could be used to identify people and store details about their driving habits and daily life, contravening the Data Protection Act along with similar legislation (see personally identifiable information). The laws in the UK are strict for any system that uses CCTV footage and can identify individuals.
Also of concern is the safety of the data once it is mined, following the discovery of police surveillance records lost in a gutter.
There is also a case in the UK for saying that use of ANPR cameras is unlawful under the Regulation of Investigatory Powers Act 2000. The breach exists, some say, in the fact that ANPR is used to monitor the activities of law-abiding citizens and treats everyone like the suspected criminals intended to be surveyed under the Act. The police themselves have been known to refer to the system of ANPR as a "24/7 traffic movement database" which is a diversion from its intended purpose of identifying vehicles involved in criminal activities. The opposing viewpoint is that where the plates have been cloned, a 'read' of an innocent motorist's vehicle will allow the elimination of that vehicle from an investigation by visual examination of the images stored. Likewise, stolen vehicles are read by ANPR systems between the time of theft and report to the Police, assisting in the investigation.
The Associated Press reported in August 2011 that New York Police Department cars and license plate tracking equipment purchased with federal HIDTA (High Intensity Drug Trafficking Area) funds were used to spy on Muslims at mosques, and to track the license plate numbers of worshipers.
Police in unmarked cars outfitted with electronic license plate readers would drive down the street and automatically catalog the plates of everyone parked near the mosque, amassing a covert database that would be distributed among officers and used to profile Muslims in public.
In 2013 the American Civil Liberties Union (ACLU) released 26,000 pages of data about ANPR systems obtained from local, state, and federal agencies through freedom of information laws. "The documents paint a startling picture of a technology deployed with too few rules that is becoming a tool for mass routine location tracking and surveillance" wrote the ACLU. The ACLU reported that in many locations the devices were being used to store location information on vehicles which were not suspected of any particular offense. "Private companies are also using license plate readers and sharing the information they collect with police with little or no oversight or privacy protections. A lack of regulation means that policies governing how long our location data is kept vary widely," the ACLU said. In 2012 the ACLU filed suit against the Department of Homeland Security, which funds many local and state ANPR programs through grants, after the agency failed to provide access to records the ACLU had requested under the Freedom of Information Act about the programs.
In mid-August 2015, in Boston, it was discovered that the license plate records for a million people were online and unprotected.
In April 2020, The Register UK, with the help of security researchers, discovered nine million ANPR logs left wide open on the internet. The 3M Sheffield Council system had been online and unprotected since 2013-2014.
Plate inconsistency and jurisdictional differences
Many ANPR systems claim accuracy when trained to match plates from a single jurisdiction or region, but can fail when trying to recognize plates from other jurisdictions due to variations in format, font, color, layout, and other plate features. Some jurisdictions offer vanity or affinity plates (particularly in the US), which can create many variations within a single jurisdiction.
From time to time, US states will make significant changes in their license plate protocol that will affect OCR accuracy. They may add a character or add a new license plate design. ALPR systems must adapt to these changes quickly in order to be effective. Another challenge with ALPR systems is that some states have the same license plate protocol. For example, more than one state uses the standard three letters followed by four numbers. So each time the ALPR systems alarms, it is the user's responsibility to make sure that the plate which caused the alarm matches the state associated with the license plate listed on the in-car computer. For maximum effectiveness, an ANPR system should be able to recognize plates from any jurisdiction, and the jurisdiction to which they are associated, but these many variables make such tasks difficult.
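One simple way to express per-jurisdiction plate formats is as a set of patterns. Because several jurisdictions share a format, a match on format alone can return more than one candidate, which is why the operator must still confirm the state shown on the in-car computer. The sketch below (Python regular expressions over a deliberately tiny, partly hypothetical rule set) illustrates that ambiguity.

    import re

    # Deliberately small, partly hypothetical rule set; real registries are far larger.
    PLATE_FORMATS = {
        "State A": re.compile(r"^[A-Z]{3}\d{4}$"),  # three letters, four digits
        "State B": re.compile(r"^[A-Z]{3}\d{4}$"),  # same format as State A
        "State C": re.compile(r"^\d{3}[A-Z]{3}$"),  # three digits, three letters
    }

    def candidate_states(plate: str) -> list:
        return [state for state, pattern in PLATE_FORMATS.items() if pattern.match(plate)]

    print(candidate_states("ABC1234"))  # ['State A', 'State B'] -> ambiguous, operator must confirm
    print(candidate_states("123XYZ"))   # ['State C']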
Currently at least one US ANPR provider (PlateSmart) claims their system has been independently reviewed as able to accurately recognize the US state jurisdiction of license plates, and one European ANPR provider claims their system can differentiate all EU plate jurisdictions.
Accuracy and measurement of ANPR system performance
A few ANPR software vendors publish accuracy results based on image benchmarks. These results may vary depending on which images the vendor has chosen to include in their test. In 2017, Sighthound reported a 93.6% accuracy on a private image benchmark. In 2017, OpenALPR reported accuracy rates for their commercial software in the range of 95-98% on a public image benchmark. April 2018 research from Brazil's Federal University of Paraná and Federal University of Minas Gerais obtained a recognition rate of 93.0% for OpenALPR and 89.8% for Sighthound, running both on the SSIG dataset; and a rate of 93.5% for a system of their own design based on the YOLO object detector, also using the SSIG dataset. Testing a "more realistic scenario" involving both plate and reader moving, the researchers obtained rates of less than 70% for the two commercial systems and 78.3% for their own.
See also
AI effect
Applications of artificial intelligence
Facial recognition system
Road policing unit
Vehicle location data
Lists
List of emerging technologies
Outline of artificial intelligence
References
Surveillance
Applications of computer vision
Artificial intelligence applications
Authentication methods
Electronic toll collection
Traffic enforcement systems
Road traffic management
Optical character recognition
Automatic identification and data capture
Articles containing video clips
Government by algorithm |
33194455 | https://en.wikipedia.org/wiki/IPadOS | IPadOS | iPadOS is a mobile operating system developed by Apple Inc. for its iPad line of tablet computers. It is a rebranded variant of iOS, the operating system used by Apple's iPhones, renamed to reflect the diverging features of the two product lines, particularly the iPad's multitasking capabilities and support for keyboard use. It was introduced as iPadOS 13 in 2019, reflecting its status as the successor to iOS 12 for the iPad, at the company's 2019 Worldwide Developers Conference. iPadOS was released to the public on September 24, 2019. The current public release is iPadOS 15.3.1, released on February 10, 2022.
History
The first iPad was released in 2010 and ran iPhone OS 3.2, which added support for the larger device to the operating system, previously only used on the iPhone and iPod Touch. This shared operating system was rebranded as "iOS" with the release of iOS 4.
The operating system initially had rough feature parity running on the iPhone, iPod Touch, and iPad, with variations in user interface depending on screen size, and minor differences in the selection of apps included. However, over time, the variant of iOS for the iPad incorporated a growing set of differentiating features, such as picture-in-picture, the ability to display multiple running apps simultaneously (both introduced with iOS 9 in 2015), drag and drop, and a dock that more closely resembled the one in macOS than the one on the iPhone (added in 2017 with iOS 11). Standard iPad apps were increasingly designed to support the optional use of a physical keyboard.
To emphasize the different feature set available on the iPad, and to signal their intention to develop the platforms in divergent directions, at WWDC 2019, Apple announced that the variant of iOS that runs on the iPad would be rebranded as "iPadOS". The new naming strategy began with iPadOS 13.1, in 2019.
On June 22, 2020, at WWDC 2020, Apple announced iPadOS 14, with compact designs for search, Siri, and calls, improved app designs, handwriting recognition, better AR features, enhanced privacy protections, and app widgets. iPadOS 14 was released to the public on September 16, 2020.
On June 7, 2021, at WWDC 2021, iPadOS 15 was announced with widgets on the Home Screen and the App Library, the same features that came to the iPhone with iOS 14 in 2020. The update also brought stricter privacy measures in Safari, such as IP address blocking so that websites cannot see the user's IP address. iPadOS 15 was released to the public on September 20, 2021.
Features
Many features of iPadOS are also available on iOS; however, iPadOS contains some features that are not available in iOS and lacks some features that are available in iOS.
iPadOS 13
Home Screen
Unlike previous versions of iOS, the icon grid displays up to five rows and six columns of apps, regardless of whether the device is in portrait or landscape orientation. The first page of the home screen can be configured to show a column of widgets from applications for easy access. Spotlight Search is no longer part of the widgets but can still be accessed by swiping down from the center of the home screen or pressing Command + Space on a connected keyboard.
Multitasking
iPadOS features a multitasking system developed with more capabilities compared to iOS, with features like Slide Over and Split View that make it possible to use multiple different applications simultaneously. Double-clicking the Home Button or swiping up from the bottom of the screen and pausing will display all currently active spaces. Each space can feature a single app, or a Split View featuring two apps. The user can also swipe left or right on the Home Indicator to go between spaces at any time, or swipe left/right with four fingers.
While using an app, swiping up slightly from the bottom edge of the screen will summon the Dock, where apps stored within can be dragged to different areas of the current space to be opened in either Split View or Slide Over. Dragging an app to the left or right edge of the screen will create a Split View, which allows both apps to be used side by side. The size of the two apps in Split View can be adjusted by dragging a pill-shaped icon in the center of the vertical divider, and dragging the divider all the way to one side of the screen closes the respective app. If the user drags an app from the dock over the current app, it will create a floating window called Slide Over, which can be dragged to either the left or right side of the screen. A Slide Over window can be hidden by swiping it off the right side of the screen, and swiping left from the right edge of the screen will restore it. Slide Over apps can also be cycled between by swiping left or right on the Home Indicator in the Slide Over window, and pulling up on it will open an app switcher for Slide Over windows. A pill-shaped icon at the top of apps in Split View or Slide Over allows them to be switched in and out of Split View and Slide Over.
The user can now have several instances of a single app open at once. A new App Exposé mode has been added which allows the user to see all of the instances of an app.
In many applications, a notable exception being YouTube, videos can be shrunk down into a picture-in-picture window so the user can continue watching them while using other apps. This window containing the video can be resized by pinching and spreading and can be docked to any of the four corners of the screen. It can also be hidden by swiping it off the side of the screen; an arrow at the edge marks where the video is hidden, and swiping the arrow brings it back onscreen.
Safari
Safari now shows desktop versions of websites by default, includes a download manager, and has 30 new keyboard shortcuts if an external keyboard is connected.
Sidecar
Sidecar allows an iPad to function as a second monitor for macOS, named in reference to motorcycle sidecars. When using Sidecar, the Apple Pencil can be used to emulate a graphics tablet for applications like Photoshop. This feature is only supported on iPads that support the Apple Pencil. However, earlier versions of iPadOS 13 allowed all iPads compatible with iPadOS 13 to work with Sidecar.
Storage
iPadOS allows external storage, such as USB flash drives, portable hard drives, and solid state drives, to be connected to an iPad via the Files app. iPad Pros from the 3rd generation onwards connect over USB-C, but the Lightning camera connection kit also works to connect external drives to earlier iPads.
Mouse and trackpad support
Mouse and trackpad support was added in version 13.4.
iPadOS 14
Scribble
Introduced in iPadOS 14, Scribble converts text handwritten by an Apple Pencil into typed text in most text fields.
iPadOS 15
Widgets
Beginning with iPadOS 15, widgets can be placed on the Home Screen.
Translate
Beginning with iPadOS 15, Translate is available. The feature was announced on June 7, 2021 at WWDC 2021. Translation works with 11 languages.
References
External links
iOS Reference Library at the Apple Developer site
IPad
Apple Inc. operating systems
Mach (kernel)
Mobile operating systems
Products introduced in 2019
Tablet operating systems |
1254288 | https://en.wikipedia.org/wiki/Versata | Versata | Versata is a business-rules based application development environment running in Java EE. It is a subsidiary of Trilogy, Inc.
History
Versata started in the early 1990s as a software consulting company called Vision Software. Over time it developed and sold software for the Microsoft Visual Basic development market. Around 1994, it began development of an integrated development environment for applications. It included a GUI builder and a business rules engine that enabled developers to create a Web application rapidly using MS SQL Server or Oracle as the backend. The product, called Vision Jade, was released around 1997. It was enhanced to support three-tier applications and Java thin clients.
Vision Software changed its key product and company name to Versata, went public in March 2000 and, on that day, was worth an astonishing $4 billion—astonishing considering that the company had revenues of about $60 million and was losing a lot of money; but this was during the Dot-com bubble. Despite hard times, Versata has managed to stay alive and maintain its customer base.
In November 2000, Versata expanded into the business workflow area with the acquisition of Verve, Inc..
From early 2001 through mid-2003 Versata's revenues were in quarter over quarter decline until Alan Baratz took over as CEO. Five consecutive quarters of growth followed until early 2005 when revenues once again took a downward plunge.
In mid-2005, the company was notified by NASDAQ that it no longer met NASDAQ's requirements for continued listing, related to maintenance of a minimum amount of shareholders' equity, market value, or net income. Rather than continue to focus on these requirements, the company decided to move to the OTC market (also known as the Pink Sheets) in order to remain publicly traded.
On 7 December 2005, Versata announced that Austin based Trilogy, Inc. had made an offer to acquire the company by tender. That deal was consummated in February 2006, taking the company private.
Trilogy then proceeded to merge portions of Trilogy, specifically, Trilogy Technology Group, into Versata and began acquiring further companies, reorganizing dramatically and offshoring most technical positions to its office in Bangalore, India.
From 2006 to 2008, Versata continued to make acquisitions, mostly in the US. Most of the employees in the acquired companies were laid off, with the majority of the work being offshored to its India office in Bangalore.
In early 2009, Versata made another major overhaul of its business model when it asked all its employees in India to work as contractors through oDesk for gDev, an entity incorporated by Trilogy to manage its outsourcing activities. The only employees left in Versata were the ones in the US.
A jury in the Eastern District of Texas awarded Versata Software $139m following its decision that SAP infringed two of Versata's patents - U.S. Patent No. 6,553,350 and U.S. Patent No. 5,878,400. Sam Baxter, Ted Stevenson, Scott Cole and Steve Pollinger of McKool Smith represented Versata on this case. iRunway India Private Limited and NTrak LLC were the technical consultants and provided end-to-end litigation support to McKool Smith.
The case has been rumbling on for a couple of years now, hinging on Versata-owned patents that cover mechanisms for pricing products. In January 2011, the judge in the case set aside the damages award, and ordered a new trial on damages.
In June 2010, Versata filed an antitrust complaint against SAP AG. It alleges that SAP illegally excluded Versata from selling to the vast majority of large ERP customers.
Acquisitions
On July 3, 2006, Versata acquired Artemis International Solutions Corporation, a provider of project and product portfolio management tools, including Artemis (software).
In September 2007, Versata acquired Nextance a provider of enterprise contract management solutions.
In November 2007, Versata acquired Gensym. Gensym is a provider of business rule engine software.
February 22, 2008 – Privately held Versata Enterprises, Inc. announced the acquisition of NUVO Network Management Inc. NUVO was a Canadian-based managed service provider and software provider.
February 25, 2008 - Versata acquired AlterPoint, a maker of Network Change and Configuration Management (NCCM) software.
March 2008 - Versata acquired TenFold Corporation. TenFold Corporation (OTC: TENF.OB) is a provider of EnterpriseTenFold SOA, an SOA-compliant, Ajax-enabled solutions framework for adding functionality to existing applications and building enterprise-scale applications.
In May 2008, Versata acquired Evolutionary Technologies International (ETI) and Clear Technologies.
On August 7, 2009 - Versata announced the acquisition of Everest Software, Inc. (Everest), a provider of retail and wholesale business management software.
On January 14, 2010 - Versata announced the acquisition of PurchasingNet, Inc., a Web-based provider of eProcurement, ePayables and Financial Management services and solutions to mid- and large-sized organizations.
References
External links
Gensym Home Page
Nextance Home Page
Versata Think3 Corporate Fraud?
AlterPoint Home Page
Software companies of the United States
Rule engines |
2285258 | https://en.wikipedia.org/wiki/Conway%27s%20law | Conway's law | Conway's law is an adage stating that organizations design systems that mirror their own communication structure. It is named after computer programmer Melvin Conway, who introduced the idea in 1967. His original wording was:
The law is based on the reasoning that in order for a software module to function, multiple authors must communicate frequently with each other. Therefore, the software interface structure of a system will reflect the social boundaries of the organizations that produced it, across which communication is more difficult. Conway's law was intended as a valid sociological observation, although it is sometimes used in a humorous context. It was dubbed Conway's law by participants at the 1968 National Symposium on Modular Programming.
In colloquial terms, it means software or automated systems end up "shaped like" the organizational structure they are designed in or designed for. Some interpretations of the law say this organizational pattern mirroring is a helpful feature of such systems, while other interpretations say it is merely a result of human nature or organizational bias.
Variations
Eric S. Raymond, an open-source advocate, restated Conway's law in The New Hacker's Dictionary, a reference work based on the Jargon File. The organization of the software and the organization of the software team will be congruent, he said. Summarizing an example in Conway's paper, Raymond wrote:
Raymond further presents Tom Cheatham's amendment of Conway's Law, stated as:
Yourdon and Constantine, in their 1979 book on Structured Design, gave a more strongly stated variation of Conway's Law:
James O. Coplien and Neil B. Harrison stated in a 2004 book concerned with organizational patterns of Agile software development:
Supporting evidence
An example of the impact of Conway's Law can be found in the design of some organization websites. Nigel Bevan stated in a 1997 paper, regarding usability issues in websites: "Organisations often produce web sites with a content and structure which mirrors the internal concerns of the organisation rather than the needs of the users of the site."
Evidence in support of Conway's law has been published by a team of Massachusetts Institute of Technology (MIT) and Harvard Business School researchers who, using "the mirroring hypothesis" as an equivalent term for Conway's law, found "strong evidence to support the mirroring hypothesis", and that the "product developed by the loosely-coupled organization is significantly more modular than the product from the tightly-coupled organization". The authors highlight the impact of "organizational design decisions on the technical structure of the artifacts that these organizations subsequently develop".
Additional and likewise supportive case studies of Conway's law have been conducted by Nagappan, Murphy and Basili at the University of Maryland in collaboration with Microsoft, and by Syeed and Hammouda at Tampere University of Technology in Finland.
See also
Cognitive dimensions of notations
Deutsch limit
Organizational theory
Good Regulator
References
Further reading
Alan MacCormack, John Rusnak & Carliss Baldwin, 2012, "Exploring the Duality between Product and Organizational Architectures: A Test of the 'Mirroring' Hypothesis," Research Policy 41:1309–1324 [earlier Harvard Business School Working Paper 08-039], see , accessed 9 March 2015.
Lise Hvatum & Allan Kelly, Eds., "What do I think about Conway's Law now? Conclusions of a EuroPLoP 2005 Focus Group," European Conference on Pattern Languages of Programs, Kloster Irsee, Germany, January 16, 2006, see , addressed 9 March 2015.
Lyra Colfer & Carliss Baldwin. "The Mirroring Hypothesis: Theory, Evidence and Exceptions." Harvard Business School Working Paper, No. 16-124, April 2016. (Revised May 2016.) See , accessed 2 August 2016.
Adages
Computer architecture statements
Software project management
Software design
Computer-related introductions in 1968 |
18946341 | https://en.wikipedia.org/wiki/Johannes%20Gehrke | Johannes Gehrke | Johannes Gehrke is a German computer scientist and the director of Microsoft Research in Redmond and CTO and Head of Machine Learning for the Microsoft Teams Backend. He is an ACM Fellow, an IEEE Fellow, and the recipient of the 2011 IEEE Computer Society Technical Achievement Award. From 1999 to 2015, he was a faculty member in the Department of Computer Science at Cornell University, where at the time of his leaving he was the Tisch University Professor of Computer Science.
Gehrke is best known for his contributions to database systems, data mining, and data privacy. He developed some of the fastest data mining algorithms for frequent pattern mining, sequential pattern mining, and decision tree construction, as well as one of the first sensor network query processors, which pioneered in-network query processing for wireless sensor networks; he is also known for his work on data privacy. His work on data privacy resulted in a new version of OnTheMap published by the US Census Bureau, the very first public data product published by any official government agency in the world with provable privacy guarantees (using a variant of Differential Privacy).
Education
Johannes Gehrke studied computer science at the Karlsruhe Institute of Technology from 1990 to 1993; he received an M.S. degree from the Department of Computer Science at the University of Texas at Austin in 1995 and a PhD from the University of Wisconsin, Madison in 1999 for a thesis in data mining.
Career
From 1999 to 2015, Gehrke was a professor in the Department of Computer Science at Cornell University. His research group was popularly known as the Big Red Data Group, and he graduated 25 PhD students. From 2005 to 2008, he was Chief Scientist at Fast Search and Transfer. He has been in product groups at Microsoft since 2012, first building Delve and the Office Graph, then building people and feed experiences across all of Microsoft 365, and then serving as chief architect and head of AI of the Microsoft Teams backend. Since 2020, he has a dual role across research and product, managing all of Microsoft Research in Redmond and continuing as CTO and head of AI for the Microsoft Teams backend.
Gehrke received a National Science Foundation Career Award, a Sloan Research Fellowship, and a Humboldt Research Award. In 2011, he received the IEEE Computer Society Technical Achievement Award and a Blavatnik Award for Young Scientists. In 2014, he became a Fellow of the Association for Computing Machinery, and in 2020 he was elected an IEEE Fellow.
Books
Since its second edition, Gehrke has been a co-author of one of the main textbooks on database systems, commonly known as the Cow Book.
References
External links
Johannes Gehrke's homepage at Cornell: http://www.cs.cornell.edu/johannes/
Database research at Cornell University: http://www.cs.cornell.edu/bigreddata/
Johannes Gehrke at DBLP: http://www.informatik.uni-trier.de/~ley/db/indices/a-tree/g/Gehrke:Johannes.html
Database researchers
Fellows of the Association for Computing Machinery
German computer scientists
University of Wisconsin–Madison College of Letters and Science alumni
Cornell University faculty
Living people
Data miners
Microsoft technical fellows
20th-century American engineers
21st-century American engineers
20th-century American scientists
21st-century American scientists
Year of birth missing (living people)
German emigrants to the United States
Date of birth missing (living people)
Place of birth missing (living people) |
61422361 | https://en.wikipedia.org/wiki/Ronald%20Barak | Ronald Barak | Ronald S. Barak (born June 7, 1943) is an American gymnast. At the 1961 Maccabiah Games he won eight gold medals, one silver medal, and one bronze medal. At the 1964 NCAA Men's Gymnastics Championships he won the all-around competition, the horizontal bars, and the parallel bars, and at the 1964 Amateur Athletic Union (AAU) National Gymnastics Competition he was the champion in the horizontal bars. He competed in eight events at the 1964 Summer Olympics.
Early and personal life
Barak was born in Los Angeles, California, and is Jewish. He attended Louis Pasteur Junior High School in West Los Angeles, and Alexander Hamilton High School in Los Angeles.
He then attended the University of Southern California (USC; B.S. with honors in physics, 1964), and was awarded USC's Athlete of the Year Award in 1964. Barak also attended the University of Southern California Law School (J.D., 1968). He became a partner, chairman of the real estate section, and co-managing partner at the law firm of Paul, Hastings, Janofsky & Walker, and was later a partner at the law firm of Manatt, Phelps & Phillips.
He authored the mystery novel A Season For Redemption (2010) and the political thriller The Amendment Killer, published in November 2017. He lives in Pacific Palisades, California.
Gymnastics career
In 1960 Barak was the LA City Schools horizontal bar champion.
Barak competed for the US in gymnastics at the 1961 Maccabiah Games, winning eight gold medals, one silver medal, and one bronze medal.
In 1962, Barak led the USC Trojans to a National Collegiate Athletic Association (NCAA) title in gymnastics, and won the all-around in the Big 6 Conference. He sat out 1963 with injuries.
At the 1964 NCAA Men's Gymnastics Championships, Barak won three individual titles—the all-around competition, the horizontal bars, and the parallel bars. At the 1964 Amateur Athletic Union (AAU) National Gymnastics Competition, he was the champion in the horizontal bars. He was named a National Association of Gymnastics Coaches First Team All-American in all-around, high bar, and parallel bars.
Barak was a member of the United States men's national gymnastics team that placed seventh in the team combined exercise competition at the 1964 Tokyo Olympics. He was 25th in the rings, 31st in the horizontal bars, 39th in the all-around competition out of 130 competitors, 45th in the parallel bars, 54th in the floor exercise, 67th in the pommel horse, and 95th in the vault.
From 1965 to 1968, while attending law school, he was head coach of the USC Trojans varsity gymnastics team. In 1967, Barak was the coach of the United States gymnastics team that won a silver medal at the 1967 World University Games.
Halls of Fame
In 1990, Barak was inducted into the Southern California Jewish Sports Hall of Fame. In 1995 he was inducted into the U.S. Gymnastics Hall of Fame. In 2017 he was inducted into the Los Angeles City Schools Hall of Fame.
References
External links
1943 births
Living people
Jewish gymnasts
Jewish American sportspeople
Competitors at the 1961 Maccabiah Games
Maccabiah Games medalists in gymnastics
Maccabiah Games gold medalists for the United States
Maccabiah Games silver medalists for the United States
Maccabiah Games bronze medalists for the United States
USC Trojans athletes
USC Trojans coaches
USC Gould School of Law alumni
Paul Hastings partners
American male artistic gymnasts
Olympic gymnasts of the United States
Gymnasts at the 1964 Summer Olympics
Gymnasts from Los Angeles
Real property lawyers
Lawyers from Los Angeles
Jewish American attorneys
Jewish American novelists
21st-century American Jews |
51179020 | https://en.wikipedia.org/wiki/Fortive | Fortive | Fortive is an American diversified industrial technology conglomerate company headquartered in Everett, Washington. Fortive was spun off from Danaher in July 2016. Mitchell Rales and Steven M. Rales, Danaher's founders, retained board seats with Fortive after the separation. At the point of its independent incorporation, Fortive immediately became a component of the S&P 500. In 2016, Fortive controlled over 20 businesses in the areas of field instrumentation, transportation, sensing, product realization, automation, and franchise distribution. Later the transportation, automation and franchise distribution businesses would be spun off. In 2018 and 2019, Fortune named Fortive as a Future 50 company. In 2020, Fortune named Fortive one of the world's most admired companies along with other major tech companies like Apple, Amazon, and Microsoft. 2020 also marked the third year in a row Fortive has been named to the Fortune 500.
Acquisitions
2016
In September 2016, Fluke acquired eMaint, a CMMS IIoT system. In October 2016, Gilbarco Veeder-Root purchased Global Transportation Technologies (GTT), a provider in traffic management. These marked two early moves by Fortive subsidiaries in pursuit of Fortive's vision of building on their legacy heavy asset manufacturing businesses with software-enabled workflow technologies.
2017
In July 2017, Fortive acquired Pittsburgh-based Industrial Scientific, which manufactures gas detection products. Fortive later established a second headquarters in the Industrial Scientific building, where much of its data science and analytics team is now based.
In September 2017, Fortive purchased Landauer for $770 million. Landauer is a provider of subscription-based technical and analytical services to determine occupational and environmental radiation exposure, as well as a domestic provider of outsourced medical physics services. They are headquartered in Glenwood, Illinois.
Fortive subsidiary Gilbarco Veeder Root acquired Orpak Systems, which delivers technologies to oil companies and commercial fleets.
2018
In July 2018, Fortive announced it was buying software firm Accruent for about $2 billion. Accruent makes software to track real estate and facilities. Also in July 2018, Fortive announced it would buy construction software company Gordian for $775 million from private equity firm Warburg Pincus. Gordian, based in Greenville, South Carolina, makes software that tracks costs of construction projects, manages facility operations and generally gives building companies more insight into big projects. In December 2018 Tektronix acquired Initial State.
2019
In June 2018, Fortive made a binding offer to buy Johnson & Johnson's Advanced Sterilization Products (ASP) business. The deal was valued at $2.8 billion, made up of $2.7 billion in cash from Fortive and $0.1 billion of retained net receivables, and closed in April 2019. Furthering its investment in healthcare, in November 2019 Fortive acquired Censis Technologies, a SaaS-based provider of inventory management in the surgical field headquartered in Franklin, Tennessee, from Riverside, a global private equity firm headquartered in Cleveland, Ohio. Terms of the deal were not disclosed.
In July 2019, Fluke acquired Germany-based Pruftechnik, a company working in precision laser shaft alignment, condition monitoring, and non-destructive testing, to further its reliability business unit.
In addition to previous acquisitions, such as Predictive Solutions (January 2017) and ActiveBlu (2017), Industrial Scientific has made multiple acquisitions as part of Fortive. In June 2019, Industrial Scientific entered into a definitive agreement to buy Canada-based Intelex, a provider of SaaS-based Environmental, Health, Safety and Quality (EHSQ) management software. The following month (July 2019), Industrial Scientific entered into a definitive agreement to acquire SAFER Systems, a provider for the chemical, oil & gas, and transportation industries.
2020
In November, Intelex Technologies, a subsidiary of Industrial Scientific, announced that it has acquired ehsAI, a compliance automation technology provider.
2021
In July, Fortive exercised its option on TeamSense, finalizing the acquisition from PSL. Later that month, Fortive entered into a definitive agreement with Bayard Capital and Accel Partners to acquire ServiceChannel, a global provider of SaaS-based multi-site facilities maintenance service solutions with an integrated service-provider network; Fortive anticipated that the acquisition would close in the third quarter of 2021. ServiceChannel's FY 2021 revenue was expected to be approximately $125 million (recurring revenue of approximately $117 million), with an expected long-term revenue growth rate in the mid-teens. Fortive expected the acquisition to enhance its revenue growth profile by approximately 50 basis points and anticipated reaching its 10% ROIC target on the deal in five years. The purchase price was approximately $1.2 billion, expected to be funded primarily with available cash. Upon closing, ServiceChannel was expected to operate as an independent operating company within Fortive's Intelligent Operating Solutions segment, expanding Fortive's offering of facility and asset lifecycle workflow solutions alongside Accruent and Gordian.
Divestitures
In October 2018, Fortive spun off their automation businesses, namely, Kollmorgen, Thomson, Portescap, Jacobs Vehicle Systems, and associated subsidiaries to Altra Industrial Motion for an estimated $3 billion.
In September 2019, Fortive announced its intention to split into two separate publicly traded companies. In January 2020, the name of the new company was announced as Vontier, which would comprise the transportation and franchise businesses. Specifically, the spin-off would include Gilbarco Veeder-Root, Matco Tools, Hennessey, GTT, Teletrac Navman, and associated subsidiaries. An IPO was estimated to yield $1 billion but was postponed in April 2020 due to market uncertainty stemming from the COVID-19 pandemic. Instead, Fortive chose to spin the business off to shareholders in October 2020 without first holding the IPO. Upon separation, Vontier replaced Noble Energy as a member of the S&P 500.
Innovation
Historically, growth in Fortive's portfolio companies had come primarily from M&A activity and operational excellence under the Danaher Business System. Fortive has sought to achieve organic growth by establishing a culture of experimentation and innovation; early on, this meant working with Innovators DNA to incorporate best practices. This approach has yielded notable new product introductions, including the Tektronix MS05X "Elemental", Fluke's T6, Invetech's Formulate and Fill platform, Anderson-Negele's paperless recorder, and Fluke's ii900. In the middle of the COVID-19 pandemic, when there was a shortage of N95 masks, ASP quickly adapted existing technology to allow hospitals to clean masks.
In May 2020, Fortive partnered with Pioneer Square Labs (PSL) to launch tech startups. In June 2020, Fortive, in partnership with PSL, launched TeamSense.
Culture
Fortive's culture is claimed to be founded on two key principles: continuous improvement and integrity & compliance.
Fortive's continuous improvement culture dates back to the Toyota Production System adopted by Danaher. At Fortive it has been branded as the Fortive Business System or FBS for short. FBS has evolved to fit the needs of Fortive's operating companies and strategic focus on innovation. FBS has been referenced on multiple earnings calls as a key driver for sustained growth and operational excellence within the various operating companies.
Fortive's culture around integrity and compliance has led to several initiatives around Inclusion and Diversity. In 2018, 2019 and 2020 Fortive was named among the best places to work for LGBTQ Equality by the Human Rights Campaign.
Fortive allows each of its associates a paid day of leave each year to volunteer in their local community. In 2020, Newsweek named Fortive as one of their top 500 most responsible companies.
References
External links
American companies established in 2016
Holding companies established in 2016
Companies listed on the New York Stock Exchange
Companies based in Everett, Washington
Conglomerate companies of the United States
2016 establishments in Washington (state)
Corporate spin-offs
Danaher Corporation |
1919696 | https://en.wikipedia.org/wiki/Simmons%20Bank%20Arena | Simmons Bank Arena | Simmons Bank Arena (previously Verizon Arena and Alltel Arena) is an 18,000-seat multi-purpose arena in North Little Rock, Arkansas, directly across the Arkansas River from downtown Little Rock. Opened in October 1999, it is the main entertainment venue serving the greater Little Rock area.
The Arkansas–Little Rock Trojans, now known for sports purposes as the Little Rock Trojans and representing the University of Arkansas at Little Rock in NCAA Division I sports, played home basketball games at the arena from the time when the arena opened until the team moved in 2005 to a new arena, the Jack Stephens Center, on the school's campus in Little Rock. The Arkansas RiverBlades, a defunct ice hockey team of the ECHL, the Arkansas RimRockers, a defunct minor league basketball team of the NBA Development League, and the Arkansas Twisters, a defunct af2 team, also played at the arena. The arena is also used for other events, including concerts, rodeos, auto racing, professional wrestling, and trade shows and conventions.
History
On August 1, 1995, Pulaski County, Arkansas, voters approved a one-year, one-cent sales tax for the purpose of building a multi-purpose arena, expanding the Statehouse Convention Center in Little Rock, and making renovations to the Main Street bridge between Little Rock and North Little Rock. $20 million of the sales tax proceeds went toward the Convention Center expansion, with the remainder used to build the arena.
That money, combined with a $20 million contribution from the State of Arkansas, $17 million from private sources, and $7 million from Little Rock-based Alltel Corporation, paid for the construction of the arena, which cost nearly $80 million to build. When the doors opened in 1999, the facility was paid for and there was no public indebtedness.
Two sites in North Little Rock drew interest from county officials for the proposed arena. The first was a commercial site west of Interstate 30, which contained a strip mall, a Kroger, and an abandoned Kmart storefront. The second site was a plot at the foot of the Broadway Bridge.
The Pulaski County Multipurpose Civic Center Facilities Board selected the larger site for the arena in 1996 and paid $3.7 million for the land, some of which was acquired through eminent domain, a move protested in court by several landowners.
The second site later would be chosen for the new baseball stadium, Dickey-Stephens Park, constructed for the Arkansas Travelers. The Class AA minor-league baseball team moved from the then 73-year-old Ray Winder Field in Little Rock to a new $28 million home in North Little Rock at the start of the 2007 season.
The arena was the home of the 2003, 2006, and 2009 Southeastern Conference women's basketball tournament and the 2000 Sun Belt Conference men's basketball tournament. The arena holds the all-time attendance record for an SEC Women's Tournament when 43,642 people attended the event in 2003.
The arena hosted portions of the first and second rounds of the NCAA Men's Division I Basketball Tournament in March 2008 and the SEC Gymnastics Championships in 2007.
The arena is also used for other events: concerts (seating capacity is between 15,000 and 18,000 for end-stage concerts; the arena has an 80-by-40-foot portable stage); rodeos and auto racing (seating capacity is 14,000); and trade shows and conventions (the building offers arena floor space plus meeting space and pre-function space). As a concert venue, its location prompted Bruce Springsteen and the E Street Band to play one of their most rarely performed numbers, 1973's "Mary Queen of Arkansas", during a March 2000 show on their Reunion Tour.
The arena is owned by the Multi-Purpose Civic Center Facilities Board for Pulaski County. The arena was designed by the Civic Center Design Team (CCDT), Burt Taggart & Associates, Architects/Engineers, The Wilcox Group, Garver & Garver Engineering and Rosser International of Atlanta.
The arena held the 2004, 2007 and 2009 American Idols LIVE! Tour concerts on August 13, 2004, July 13, 2007, and July 25, 2009, respectively.
The arena's 20-year naming rights were part of a $28.1 billion sale of Alltel to Verizon Wireless, effective on June 30, 2009, with Alltel Arena renamed as Verizon Arena.
Fleetwood Mac performed at Verizon Arena on May 4, 2013, with surprise guests former President Bill Clinton and former First Lady Hillary Clinton attending the show. Fleetwood Mac drummer Mick Fleetwood introduced the couple, who were seated in an arena suite, to the sold-out audience and dedicated the song "Don't Stop", Bill Clinton's 1992 presidential election campaign song, to them.
On October 5, 2016, the arena hosted the Kellogg's Tour of Gymnastics Champions.
With expiration of initial naming rights due in 2019, new naming rights for the arena were purchased by Arkansas-based Simmons Bank in a deal announced on November 9, 2018. The name change became official on October 3, 2019.
References
External links
American Basketball Association (2000–present) venues
Arena football venues
Little Rock Trojans men's basketball
Basketball venues in Arkansas
College basketball venues in the United States
Convention centers in Arkansas
Defunct basketball venues in the United States
Defunct indoor ice hockey venues in the United States
Gymnastics venues in the United States
Indoor arenas in Arkansas
Sports in Little Rock, Arkansas
Sports venues in Arkansas
Verizon Communications
Buildings and structures in Pulaski County, Arkansas
Tourist attractions in Pulaski County, Arkansas
1999 establishments in Arkansas
Sports venues completed in 1999 |
3831877 | https://en.wikipedia.org/wiki/ZIIP | ZIIP | In IBM System z9 and successor mainframes, the System z Integrated Information Processor (zIIP) is a special purpose processor. It was initially introduced to relieve the general mainframe central processors (CPs) of specific DB2 processing loads, but currently is used to offload other z/OS workloads as described below. The idea originated with previous special purpose processors, the zAAP, which offloads Java processing, and the IFL, which runs Linux and z/VM but not other IBM operating systems such as z/OS, DOS/VSE and TPF. A System z PU (processor unit) is "characterized" as one of these processor types, or as a CP (Central Processor), or SAP (System Assist Processor). These processors do not contain microcode or hardware features that accelerate their designated workloads. Instead, by relieving the general CP of particular workloads, they often lead to a higher workload throughput at reduced license fees.
DB2 for z/OS V8 was the first application to exploit the zIIP, but now there are several IBM and non-IBM products and technologies that exploit zIIP. The zIIP requires a System z9 or newer mainframe. The z/OS 1.8 and DB2 9 for z/OS support zIIPs. IBM also offers PTFs for z/OS 1.6, z/OS 1.7, and DB2 V8 to enable zIIP usage. (DB2 9 for z/OS is the first release of DB2 that has support built in.)
IBM publicly disclosed information about zIIP technology on January 24, 2006. The zIIP hardware (i.e., microcode, as the processor's hardware does not currently differ from that of general-purpose CPUs) became generally available in May 2006. The z/OS and DB2 PTFs to take advantage of the zIIP hardware became generally available in late June 2006.
zIIPs add lower cost capacity for four types of DB2 work:
Remote DRDA access via TCP/IP. This category includes JDBC and ODBC access to DB2, including access across LPARs via HiperSockets, such as Linux on IBM Z. The exception is access to DB2 V8 stored procedures, which redirect a small portion of the work. DB2 9 native remote SQL procedures do use the zIIP.
Parallel query operations. DB2 9 can increase the amount of parallel processing and thus use the zIIP more.
XML parsing in DB2 can use zIIP processors or zAAP processors.
Certain DB2 utilities processing.
Support for zIIPs
Although DB2 UDB for z/OS was the first product released that exploited zIIP processors, it is not limited to just DB2 or IBM products. The zIIP speciality CPU can also be used for IPSec processing in TCP/IP, certain general XML processing, and IBM's Scalable Architecture for Financial Reporting. In August, 2007, Shadow, a mainframe middleware product, now owned by Rocket Software, introduced the first zIIP eligible integration for environments other than DB2, expanding the benefit of specialty engines to include Adabas, CICS, IMS, IDMS and VSAM. Other third-party independent software vendors ("ISVs") have introduced support for execution of their products on zIIPs.
Those ISVs include, among others, Software AG, Compuware, CA Technologies, BMC Software, GT Software, Inc., and Phoenix Software International.
For example, the CA NetMaster Network Management for TCP/IP product can run both its main task and its packet analyzer subtask on a zIIP. Rocket Software claims that its Shadow server allows 99% of integration processing, such as SQL to non-relational data queries and Web services/SOA workloads, to be zIIP eligible and run outside of the general purpose processor. Ivory Server for z/OS from GT Software, Inc. provides zIIP support for XML parsing, XML payload construction and data conversion processing. Additionally, Ivory Server supports the zAAP processor using the optional IBM z/OS XML Services, and the IFL processor with Linux on IBM Z. Ivory Server and Ivory Studio (the Ivory IDE) provide options that allow clients to manage the workload offloaded from the GP CPU to the zIIP specialty CPU.
Commercial software developers, subject to certain qualification rules, may obtain technical details from IBM on how to take advantage of zIIP under a Non-Disclosure Agreement.
The IBM z13 merges the zAAP functionality with zIIPs so that zAAP-eligible work now uses zIIP instead. Furthermore, IFL and zIIP processors on the IBM z13, as they use the z13 microprocessor, have simultaneous multithreading (SMT) capability.
Use of zIIPs for IBM Z Common Data Provider
IBM Z Common Data Provider is a software that collects IT operational data from z/OS systems, transforms it to a consumable format, and streams it to analytics platforms. When IBM Z Common Data Provider is used to stream operational data, the zIIP offload function can be enabled, and then the System Data Engine component of IBM Z Common Data Provider can offload eligible work from general purpose processors to zIIP processors. This minimizes the MIPS consumption on general processors (GCPs) and reduces the total cost of ownership.
However, this offloading can add overhead in CPU time. If there is not enough capacity on the zIIP processors, z/OS may redirect zIIP-eligible work to general CPUs when all zIIPs are busy. The additional (overhead) CPU time needed to use zIIP processors can then surpass the CPU time that is offloaded to them; in some cases, general CPU usage may even increase.
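As a rough, purely illustrative model of this trade-off (all figures are hypothetical and not IBM measurements), the following Python sketch estimates general-processor time after offload as the work that stays on general CPUs, plus routing overhead, plus any eligible work redirected back because the zIIPs are busy.

```python
# Back-of-envelope model of zIIP offload; every number here is hypothetical.
def gcp_time_after_offload(total_cpu_s, eligible_frac, overhead_frac, redirect_frac):
    """Estimated general-processor (GCP) seconds after enabling zIIP offload.

    total_cpu_s   -- CPU seconds the workload needs in total
    eligible_frac -- fraction of that work that is zIIP-eligible
    overhead_frac -- extra CPU cost (as a fraction of eligible work) of routing work to zIIPs
    redirect_frac -- fraction of eligible work sent back to GCPs because all zIIPs were busy
    """
    eligible = total_cpu_s * eligible_frac
    stays_on_gcp = total_cpu_s - eligible
    overhead = eligible * overhead_frac
    redirected = eligible * redirect_frac
    return stays_on_gcp + overhead + redirected

baseline = 100.0  # CPU seconds with no zIIPs installed
# Ample zIIP capacity: most eligible work really leaves the GCPs (prints 43.0).
print(gcp_time_after_offload(baseline, eligible_frac=0.6, overhead_frac=0.05, redirect_frac=0.0))
# Saturated zIIPs: redirection plus overhead exceeds the baseline (prints 103.0).
print(gcp_time_after_offload(baseline, eligible_frac=0.6, overhead_frac=0.05, redirect_frac=1.0))
```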
See also
Integrated Facility for Linux (IFL)
ZAAP
References
External links
IBM System z Integrated Information Processor (zIIP)
IBM mainframe technology |
41162422 | https://en.wikipedia.org/wiki/Jan%20Verelst%20%28scientist%29 | Jan Verelst (scientist) | Jan Verelst (born ca 1960) is a Belgian computer scientist, Professor and Dean of the Department of Management Information Systems at the University of Antwerp, and Professor at the Antwerp Management School, known for his work on Normalized Systems.
Biography
Verelst obtained his Ph.D. in Management Information Systems in 1999 from the University of Antwerp with a thesis entitled "De invloed van variabiliteit op de evolueerbaarheid van conceptuele modellen van informatiesystemen" (The impact of variability on the evolvability of conceptual models of information systems).
After his graduation he was appointed Professor of Systems Development Methodology at the Faculty of Applied Economics of the University of Antwerp, and Dean of the Department of Management Information Systems. He is also appointed as Professor at Antwerp Management School.
His research interests are in the field of "conceptual modeling of information systems, evolvability, and maintainability of information systems, empirical software engineering, and open source software", specifically the development of Normalized Systems, and its development methodology.
Publications
Verelst has authored and co-authored many publications in his field of expertise. A selection of articles:
Du Bois, Bart, Serge Demeyer, and Jan Verelst. "Refactoring-improving coupling and cohesion of existing code." Reverse Engineering, 2004. Proceedings. 11th Working Conference on. IEEE, 2004.
Hidders, J., Dumas, M., van der Aalst, W. M., ter Hofstede, A. H., & Verelst, J. (2005, January). "When are two workflows the same?." In Proceedings of the 2005 Australasian symposium on Theory of computing-Volume 41 (pp. 3–11). Australian Computer Society, Inc..
Ven, Kris, Jan Verelst, and Herwig Mannaert. "Should you adopt open source software?." IEEE Software 25.3 (2008): 54-59.
Huysmans, Philip, Kris Ven, and Jan Verelst. "Using the DEMO methodology for modeling open source software development processes." Information and Software Technology 52.6 (2010): 656-671.
References
External links
Verelst Jan, Antwerp Management School
Year of birth missing (living people)
Living people
Belgian computer scientists
University of Antwerp alumni
University of Antwerp faculty |
1668538 | https://en.wikipedia.org/wiki/Spatial%20Reuse%20Protocol | Spatial Reuse Protocol | Spatial Reuse Protocol is a networking protocol developed by Cisco. It is a MAC-layer (a sublayer of the data-link layer (Layer 2) within the OSI Model) protocol for ring-based packet internetworking that is commonly used in optical fiber ring networks. Ideas from the protocol are reflected in parts of the IEEE 802.17 Resilient Packet Ring (RPR) standard.
Introduction
SRP was first developed as a data-link layer protocol to link Cisco's Dynamic Packet Transport (DPT) protocol (a method of delivering packet-based traffic over a SONET/SDH infrastructure) to the physical SONET/SDH layer. DPT cannot communicate directly with the physical layer; it was therefore necessary to develop an intermediate layer between DPT and SONET/SDH, and SRP filled this role.
Analogy to POS
SRP behaves much like the Point-to-Point Protocol (PPP) does in a Packet over SONET (POS) environment. PPP acts as an abstraction layer between a higher-level layer 2 technology such as POS and a layer 1 technology such as SONET/SDH. Layer 1 and high-level layer 2 protocols cannot interact directly without an intermediate low-level layer 2 protocol; in the case of DPT, that layer 2 protocol is SRP.
Spatial Reuse Capability
DPT environments contain dual, counter-rotating rings, somewhat like FDDI. SRP has a bandwidth-efficiency mechanism, called the Spatial Reuse Capability, which allows multiple nodes on the ring to utilize the entirety of its bandwidth. Nodes in an SRP environment can send data directly from source to destination.

Consider the following environment: a ring with six routers (A through F, sequentially) operating at OC-48c speed (2.5 Gbit/s). Routers A and D are sending data back and forth at 1.5 Gbit/s while routers B and C are sending data at 1 Gbit/s; this utilizes the entire 2.5 Gbit/s across routers A through D but still leaves routers E and F untouched. Routers E and F can therefore send data to each other at 2.5 Gbit/s concurrently, giving the ring a total throughput of 5 Gbit/s.

The reason for this is a method called "destination stripping": the destination of the data removes it from the ring, so the data is only present on the section of the network between the source and destination nodes. In "source stripping", by contrast, the data travels all the way around the ring and is removed by the source node. FDDI and Token Ring networks use source stripping, whereas DPT and SRP use destination stripping. In the OC-48c example above, in a source stripping (FDDI or Token Ring) environment, if router A wanted to communicate with router D, the entire ring would be occupied while the data was being transmitted, because the data would have to complete the loop and return to router A before being removed. In a destination stripping (DPT and SRP) environment, the data is only present between router A and router D, and the rest of the ring is free to communicate.
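As a rough illustration of the arithmetic in the example above, the following Python sketch (a hypothetical model, not Cisco code, simplified to a single ring direction and unidirectional flows) tallies per-span load under destination stripping versus source stripping.

```python
# Six nodes (A-F) on one 2.5 Gbit/s (OC-48c) ring. Under destination stripping a
# flow loads only the spans between source and destination; under source
# stripping it loads every span of the ring.
NODES = ["A", "B", "C", "D", "E", "F"]
CAPACITY = 2.5  # Gbit/s per span

def spans(src, dst):
    """Spans traversed travelling clockwise from src to dst."""
    i = NODES.index(src)
    out = []
    while NODES[i] != dst:
        out.append((NODES[i], NODES[(i + 1) % len(NODES)]))
        i = (i + 1) % len(NODES)
    return out

def span_loads(flows, source_stripping=False):
    loads = {(NODES[i], NODES[(i + 1) % len(NODES)]): 0.0 for i in range(len(NODES))}
    for src, dst, rate in flows:
        for s in (loads.keys() if source_stripping else spans(src, dst)):
            loads[s] += rate
    return loads

# Flows from the example: A->D at 1.5 Gbit/s, B->C at 1 Gbit/s, E->F at 2.5 Gbit/s
# (5 Gbit/s aggregate, matching the article's total).
flows = [("A", "D", 1.5), ("B", "C", 1.0), ("E", "F", 2.5)]

for mode, label in ((False, "destination stripping"), (True, "source stripping")):
    worst = max(span_loads(flows, source_stripping=mode).values())
    print(f"{label}: busiest span carries {worst} Gbit/s "
          f"(capacity {CAPACITY}), feasible={worst <= CAPACITY}")
```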
SRP Header
The SRP header is 16 bits (2 bytes) in total and contains five fields: Time to Live (TTL), Ring Identifier (R), Priority (PRI), Mode, and Parity (P). The TTL field is 8 bits; its only metric is hop count. The R field is 1 bit (either 0 or 1, designating the inner or outer ring). The PRI field is 3 bits, designating the packet priority. The Mode field is 3 bits, designating what type of data is contained in the payload. The P field is 1 bit.
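As an illustration, the following Python sketch packs and unpacks such a 16-bit header, assuming the five fields are laid out most-significant-bit first in the order listed above; the actual on-the-wire bit layout should be checked against the SRP specification (RFC 2892).

```python
# Assumed layout (MSB first): TTL (8 bits) | R (1) | PRI (3) | MODE (3) | P (1).
def pack_srp_header(ttl, ring, pri, mode, parity):
    assert 0 <= ttl <= 0xFF and ring in (0, 1) and 0 <= pri <= 7
    assert 0 <= mode <= 7 and parity in (0, 1)
    return (ttl << 8) | (ring << 7) | (pri << 4) | (mode << 1) | parity

def unpack_srp_header(word):
    return {
        "ttl":    (word >> 8) & 0xFF,
        "ring":   (word >> 7) & 0x1,
        "pri":    (word >> 4) & 0x7,
        "mode":   (word >> 1) & 0x7,
        "parity":  word       & 0x1,
    }

hdr = pack_srp_header(ttl=64, ring=1, pri=5, mode=3, parity=0)
print(hex(hdr))                 # 0x40d6
print(unpack_srp_header(hdr))   # recovers the original field values
```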
References
RFC2892: Spatial Reuse Protocol
Tomsu, Peter; Schmutzer, Christian, "Next Generation Optical Networks" pp. 105-113, Prentice-Hall (c) 2002
Cisco protocols |
1797747 | https://en.wikipedia.org/wiki/Euro%20sign | Euro sign | The euro sign () is the currency sign used for the euro, the official currency of the eurozone and unilaterally adopted by Kosovo and Montenegro. The design was presented to the public by the European Commission on 12 December 1996. It consists of a stylized letter E (or epsilon), crossed by two lines instead of one. In English, the sign immediately precedes the value (for instance, €10); in most other European languages, it follows the value, usually but not always with an intervening space (for instance, 10€, 10€).
Design
There were originally 32 proposed designs for a symbol for Europe's new common currency; the Commission short-listed these to ten candidates. These ten were put to a public survey. After the survey had narrowed the original ten proposals down to two, it was up to the Commission to choose the final design. The other designs that were considered are not available for the public to view, nor is any information regarding the designers available for public query. The Commission considers the process of designing to have been internal and keeps these records secret. The eventual winner was a design created by a team of four experts whose identities have not been revealed. It is assumed that the Belgian graphic designer Alain Billiet was the winner and thus the designer of the euro sign.
The official story of the design history of the euro sign is disputed by Arthur Eisenmenger, a former chief graphic designer for the European Economic Community, who says he had the idea 25 years before the Commission's decision.
The Commission specified a euro logo with exact proportions and colours (PMS Yellow foreground, PMS Reflex Blue background), for use in public-relations material related to the euro introduction. While the Commission intended the logo to be a prescribed glyph shape, type designers made it clear that they intended instead to adapt the design to be consistent with the typefaces to which it was to be added.
Use on computers and mobile phones
Generating the euro sign using a computer depends on the operating system and national conventions. Initially, some mobile phone companies issued an interim software update for their special SMS character set, replacing the less-frequent Japanese yen sign with the euro sign. Subsequent mobile phones have both currency signs.
The euro is represented in the Unicode character set with the character name EURO SIGN and the code position U+20AC (decimal 8364), as well as in updated versions of the traditional Latin character set encodings. In HTML, the &euro; entity can also be used.
History of implementation
Implicit character encoding, along with the fact that the code position of the euro sign differs across historic encoding schemes (code pages), led to many initial problems displaying the euro sign consistently in computer applications, depending on the access method. While displaying the euro sign was no problem as long as only one system was used (provided an up-to-date font with the proper glyph was available), mixed setups often produced errors. Initially, Apple, Microsoft and Unix systems each chose a different code point to represent a euro symbol: thus a user of one system might have seen a euro symbol whereas another would see a different symbol or nothing at all. Another source of problems was legacy software that could only handle older encodings such as pre-euro ISO 8859-1. In such situations, character set conversions had to be made, often introducing conversion errors such as a question mark (?) being displayed instead of a euro sign. With the widespread adoption of Unicode and UTF-8 encoding, these issues rarely arise in modern computing.
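As a minimal illustration of why such mismatches occurred, the following Python sketch (standard library only) encodes U+20AC in several encodings: the byte values differ, and pre-euro ISO 8859-1 cannot represent the character at all.

```python
# The same character, U+20AC, maps to different byte values in different
# encodings, and does not exist at all in pre-euro ISO 8859-1.
euro = "\u20ac"
print(euro)  # €

for codec in ("utf-8", "cp1252", "iso-8859-15", "iso-8859-1"):
    try:
        print(codec, euro.encode(codec).hex())
    except UnicodeEncodeError:
        print(codec, "has no euro sign")
# utf-8        e282ac
# cp1252       80
# iso-8859-15  a4
# iso-8859-1   has no euro sign
```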
Entry methods
Depending on keyboard layout and the operating system, the symbol can be entered as:
(UK/IRL)
(US INTL/ESP/SWE)
(BEL/ESP/FIN/FRA/GER/ITA/GRE/POR/CZE/EST/LTU/SVK/SWE/ROS/ROP)
(HU/PL)
(UK/IRL)
(US INTL/ESP)
in Microsoft Word in United States and more layouts
+ in Microsoft Windows (depends on system locale setting)
followed by in Chrome OS, most Linux distros, and in other operating systems using IBus.
followed by in the Vim text editor
On the macOS operating system, a variety of key combinations are used depending on the keyboard layout, for example:
in British layout
in United States layout
in Slovenian layout
in French layout
in German, Spanish and Italian layout
in Swedish layout
The Compose key sequence for the euro sign is followed by .
Typewriters
Classical typewriters are still used in many parts of the world, often recycled from businesses that have adopted desktop computers. Typewriters lacking the euro sign can imitate it by typing a capital "C", backspacing, and overstriking it with the equals sign.
Use
Placement of the sign varies. Countries have generally continued the style used for their former currencies. In those countries where previous convention was to place the currency sign before the figure, the euro sign is placed in the same position (e.g., €3.50). In those countries where the amount preceded the national currency sign, the euro sign is again placed in that relative position (e.g., 3,50 €).
The European Union’s Interinstitutional Style Guide (for EU staff) states that the euro sign should be placed in front of the amount without any space in English, but after the amount in most other languages.
In English language newspapers and periodicals, the euro sign—like the dollar sign ($) and the pound sign (£)—is placed before the figure, unspaced, as used by publications such as the Financial Times and The Economist. When written out, "euro" is placed after the value in lower case; the plural is used for two or more units, and (in English) euro cents are separated with a point, not a comma (e.g., 1.50 euro, 14 euros).
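Sign placement is typically driven by locale conventions rather than hard-coded. As a rough illustration, the following Python sketch uses only the standard library's locale module; which locales are installed, and exactly how each formats currency, varies by system, so the outputs shown in the comments are indicative only.

```python
# Locale-dependent currency formatting; locale availability is system-dependent.
import locale

for loc in ("en_IE.UTF-8", "de_DE.UTF-8", "fr_FR.UTF-8"):
    try:
        locale.setlocale(locale.LC_ALL, loc)
        print(loc, locale.currency(3.50, symbol=True, grouping=True))
    except (locale.Error, ValueError):
        print(loc, "not available on this system")
# Typical output: en_IE.UTF-8 €3.50 / de_DE.UTF-8 3,50 € / fr_FR.UTF-8 3,50 €
```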
Prices of items costing less than one euro (for example, ten cents) are often written using a local abbreviation such as "ct." (particularly in Germany, Spain, and Lithuania), "snt." (Finland), "c." (Ireland) and "Λ" (the capital letter lambda for Λεπτό, leptó, in Greece): for example, 10 ct., 10c., 10Λ, or 10 snt. The US-style "¢" is rarely seen in formal contexts. Alternatively, such prices can be written as decimals, e.g. 0.07 €.
See also
List of currency symbols currently in use
Notes
References
External links
Euro name and symbol, Directorate-General for Economic and Financial Affairs of the European Commission
Communication from the Commission: The use of the Euro symbol, July 1997, Directorate-General for Economic and Financial Affairs of the European Commission
Typing a euro symbol on a non-European QWERTY keyboard. Several methods are shown for the euro sign and other special characters.
Currency symbols
Euro
Symbols introduced in 1996
Symbols of the European Union |
650909 | https://en.wikipedia.org/wiki/Security%20printing | Security printing | Security printing is the field of the printing industry that deals with the printing of items such as banknotes, cheques, passports, tamper-evident labels, security tapes, product authentication, stock certificates, postage stamps and identity cards. The main goal of security printing is to prevent forgery, tampering, or counterfeiting. More recently many of the techniques used to protect these high-value documents have become more available to commercial printers, whether they are using the more traditional offset and flexographic presses or the newer digital platforms. Businesses are protecting their lesser-value documents such as transcripts, coupons and prescription pads by incorporating some of the features listed below to ensure that they cannot be forged or that alteration of the data cannot occur undetected.
A number of technical methods are used in the security printing industry. Security printing is most often done on security paper, but it can also occur on plastic materials.
Substrate
Paper
The substrate of most banknotes is made of paper, almost always from cotton fibres for strength and durability; in some cases linen or speciality coloured or forensic fibres are added to give the paper added individuality and protect against counterfeiting. Paper substrate may also include windows based on laser-cut holes covered by a security foil with holographic elements. All of this makes it difficult to reproduce using common counterfeiting techniques.
Polymer
Some countries, including Canada, Nigeria, Romania, Mexico, Hong Kong, New Zealand, Israel, Singapore, Malaysia, United Kingdom and Australia, produce polymer (plastic) banknotes, to improve longevity and to make counterfeiting more difficult. Polymer can include transparent windows, diffraction grating and raised printing.
Watermarks
True watermarks
A true watermark is a recognizable image or pattern in paper that appears lighter or darker than the surrounding paper when viewed with a light from behind the paper, due to variations in paper density. A watermark is made by impressing a water-coated metal stamp or dandy roll onto the paper during manufacturing. Watermarks were first introduced in Bologna, Italy, in 1282; as well as being used in security printing, they have also been used by paper makers to identify their product. To check authenticity, the paper is held against a light source: the thinner part of the watermark shines brighter against a bright background and appears darker against a dark background. The watermark is a proven anti-counterfeiting feature because most counterfeits only simulate its appearance with a printed pattern.
Simulated watermarks
Printed with white ink, simulated watermarks have a different reflectance than the base paper and can be seen at an angle. Because the ink is white, it cannot be photocopied or scanned. A similar effect can be achieved by iriodin varnish which creates reflections under certain viewing angles only and is transparent otherwise.
Watermarks are sometimes simulated on polymer currency by printing a corresponding pattern, but with little anti-counterfeiting effect. For example, the Australian dollar has its coat of arms watermarked on all its plastic bills. A Diffractive Optical Element (DOE) within the transparent window can create a comparable effect but requires a laser beam for its verification.
Intaglio printing
Intaglio is a printing technique in which the image is incised into a surface. Normally, copper or zinc plates are used, and the incisions are created by etching or engraving the image, but one may also use mezzotint. In printing, the surface is covered in ink, and then rubbed vigorously with tarlatan cloth or newspaper to remove the ink from the surface, leaving it in the incisions. A damp piece of paper is placed on top, and the plate and paper are run through a printing press that, through pressure, transfers the ink to the paper.
The very sharp printing obtained from the intaglio process is hard to imitate by other means. Intaglio also allows for the creation of latent images which are only visible when the document is viewed at a very shallow angle.
Geometric lathe work
A guilloché is an ornamental pattern formed of two or more curved bands that interlace to repeat a circular design. They are made with a geometric lathe.
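Guilloché-style interlacing bands are often approximated in software with roulette curves such as the hypotrochoid (the "spirograph" family). The following Python sketch generates points of one such curve; it is an illustrative approximation only, not the geometry produced by an actual geometric lathe, and the parameter values are arbitrary.

```python
import math

def hypotrochoid(R, r, d, steps=2000, turns=3):
    """Points of x(t) = (R-r)cos t + d cos(((R-r)/r) t), y(t) = (R-r)sin t - d sin(((R-r)/r) t)."""
    k = (R - r) / r
    pts = []
    for i in range(steps + 1):
        t = 2 * math.pi * turns * i / steps
        pts.append(((R - r) * math.cos(t) + d * math.cos(k * t),
                    (R - r) * math.sin(t) - d * math.sin(k * t)))
    return pts

# R=10, r=3, d=5 closes after 3 turns; overlaying several such curves with
# slightly different parameters yields a guilloché-like rosette.
curve = hypotrochoid(R=10, r=3, d=5)
print(len(curve), curve[0])
```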
Microprinting
This involves the use of extremely small text, and is most often used on currency and bank checks. The text is generally small enough to be indiscernible to the naked eye. Cheques, for example, use microprint as the signature line.
Optically variable ink
Optically Variable Ink (OVI) displays different colors depending on the angle at which it is viewed. It uses mica-based glitter.
Colored magnetizable inks are prepared by including chromatic pigments of high color strength. The magnetic pigments' strong inherent color generally reduces the spectrum of achievable shades. Generally, pigments should be used at high concentrations to ensure that sufficient magnetizable material is applied even in thin offset coats. Some magnetic pigments are best suited for colored magnetizable inks due to their lower blackness.
Homogeneous magnetization (no preferred orientation) is easily obtained on pigment made of spherical particles. Best results are achieved when remanence and coercive field strength are very low and the saturating magnetization is high.
When pearlescent pigments are viewed at different angles, the angle at which the light is perceived makes the color appear to change as the magnetic fields within the particles shift direction.
Holograms
A hologram may be embedded either via hot-stamping foil, wherein an extremely thin layer of only a few micrometers of depth is bonded into the paper or a plastic substrate by means of a hot-melt adhesive (called a size coat) and heat from a metal die, or it may be directly embossed as holographic paper, or onto the laminate of a card itself.
When incorporated with a custom design pattern or logo, hologram hot-stamping foils become security foils that protect credit cards, passports, banknotes and value documents from counterfeiting. Holograms help curtail the forging and duplication of products and are hence considered essential for security purposes. Once stamped on a product, they cannot be removed or forged, while also enhancing the product. From a security perspective, a stamped hologram is a superior security device, as it is virtually impossible to remove from its substrate.
Security threads
Metal threads and foils, from simple iridescent features to foil color copying to foils with additional optically variable effects are often used.
There are two kinds of security threads. One is a thin aluminum coated and partly de-metallized polyester film thread with microprinting which is embedded in the security paper as banknote or passport paper. The other kind of security thread is the single or multicolor sewing thread made from cotton or synthetic fibers, mostly UV fluorescent, for the bookbinding of passport booklets. In recent designs the security thread was enhanced with other security features such as holograms or three-dimensional effects when tilted.
On occasion, banknote designers succumb to the Titanic effect (excess belief in the latest technology) and place too much faith in some particular trick. An example is the forgery of British banknotes in the 1990s. These notes featured a "windowed" metal strip through the paper, about 1 mm wide, that comes to the paper surface every 8 mm. When examined in reflected light, the note appears to have a dotted metallic line running across it, but when viewed in transmitted light, the metal strip is dark and solid.
Duplicating this was thought to be difficult, but a criminal gang was able to reproduce it quickly. They used a cheap hot-stamping process to lay down a metal strip on the surface of the paper, then printed a pattern of solid bars over it using white ink to leave the expected metal pattern visible. At their trial, they were found to have forged tens of millions of pounds’ worth of notes over a period of years.
Magnetic ink
Because of the speed with which they can be read by computer systems, magnetic ink character recognition is used extensively in banking, primarily for personal checks. The ink used in magnetic ink character recognition (MICR) technology is also used to greatly reduce errors in automated (or computerized) reading. The pigment is dispersed in a binder system (resin, solvent) or a wax compound and applied either by pressing or by hot melt to a carrier film (usually polyethylene).
Some people believe that the magnetic ink was intended as a fraud-prevention measure, but the original intent was to have a non-optical technology so that writing on the cheque, such as signatures, would not interfere with reading. The main magnetic fonts (E13-B and CMC7) are downloadable for a small fee, and magnetic toner is available for many printers. Some higher-resolution toners have sufficient magnetic properties for magnetic reading to succeed without special toner.
Serial numbers
Serial numbers help make legitimate documents easier to track and audit. To help detect forgeries, serial numbers normally include a check digit that verifies the serial number. In banknote printing, the unique serial number provides an effective means of monitoring and verifying the production volume.
Another method of protection is to create trap numbers within the serial number range. For example, the system may automatically invalidate numbers that fall in a range of 200-300 (e.g. 210 and 205 would be invalid). The system may even trap single numbers within a block (e.g. serials ending in 51, 37 or 48 within a 200-300 block would be invalid).
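The text does not specify which check-digit scheme or trap rules a given printer uses; as an illustration only, the following Python sketch combines the well-known Luhn check digit with hypothetical trap rules modeled on the examples above.

```python
# Illustrative serial-number validation: Luhn check digit plus hypothetical traps.
def luhn_check_digit(number: str) -> int:
    total = 0
    for i, d in enumerate(int(c) for c in reversed(number)):
        if i % 2 == 0:          # double every second digit, starting from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def is_valid_serial(serial_with_check: str) -> bool:
    body, check = serial_with_check[:-1], int(serial_with_check[-1])
    if check != luhn_check_digit(body):
        return False            # failed check digit
    n = int(body)
    if 200 <= n <= 300:
        return False            # trap range (illustrative)
    if n % 100 in (51, 37, 48):
        return False            # trap endings (illustrative)
    return True

serial = "000481"
print(is_valid_serial(serial + str(luhn_check_digit(serial))))  # True
```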
Anti-copying marks
In the late twentieth century advances in computer and photocopy technology made it possible for people without sophisticated training to easily copy currency. In an attempt to prevent this, banks have sought to add filtering features to the software and hardware available to the public that senses features of currency, and then locks out the reproduction of any material with these marks. One known example of such a system is the EURion constellation.
Copy-evident
Sometimes only the original document has value. An original signed cheque for example has value but a photocopy of it does not. An original prescription script can be filled but a photocopy of it should not be. Copy-evident technologies provide security to hard copy documents by helping distinguish between the original document and the copy.
The most common technology to help differentiate originals from copies is the void pantograph. Void pantographs are essentially invisible to the untrained, naked eye on an original but when scanned or copied the layout of lines, dots and dashes will reveal a word (frequently VOID and hence the name) or symbol that clearly allows the copy to be identified. This technology is available on both traditional presses (offset and flexographic) and on the newer digital platforms. The advantage of a digital press is that in a single pass through the printer a void pantograph with all the variable data can be printed on plain paper.
Copy-evident paper, sometimes marketed as ‘security paper’, is pre-printed void pantograph paper that was usually produced on an offset or flexographic press. The quality of the void pantograph is usually quite good because it was produced on a press with a very high resolution, and, when only a small number of originals are to be printed, it can be a cost-effective solution; however, the advent of the digital printer has rapidly eroded this benefit.
A second technology which complements and enhances the effectiveness of the void pantograph is the Verification Grid. This technology is visible on the original, usually as fine lines or symbols but when photocopied these lines and images disappear; the inverse reaction of the void pantograph. The most common examples of this technology are on the fine lines at the edge of a cheque which will disappear when copied or on a coupon when a symbol, such as a shopping cart, disappears when an unauthorized copy is made. Verification Grid is available for either traditional or digital presses.
Together the void pantograph and the Verification Grid complement each other because the reactions to copying are inverse, resulting in a higher degree of assurance that a hard copy document is an original.
Prismatic coloration
The use of color can greatly assist the prevention of forgeries. Including a color on a document forces any copying attempt to use a color photocopier; however, the use of these machines also tends to enhance the effectiveness of other technologies such as Void Pantographs and Verification Grids (see Copy-evident above).
By using two or more colors in the background and blending them together, a prismatic effect can be created. This can be done on either a traditional or a digital press. When someone attempts to photocopy a document using this technique, the scanning and re-creation by a color copier is inexact, usually resulting in banding or blotching and thereby immediate recognition of the document as a copy.
A frequent example of prismatic coloring is on checks where it is combined with other techniques such as the Void Pantograph to increase the difficulty of successful counterfeiting.
Halo
Carefully created images can be hidden in the background or in a picture on a document. These images cannot be seen without the help of an inexpensive lens of a specific line screening. When placed over the location of the image and rotated the image becomes visible. If the document is photocopied the Halo image is lost. A known implementation is Scrambled Indicia.
Halo can be printed on traditional or digital presses. The advantage of traditional presses is that multiple images can be overlaid in the same location and become visible in turn as the lens is rotated.
Halo is used as a technique to authenticate the originality of the document and may be used to verify critical information within the document. For example, the value of a coupon might be encoded as a Halo image that could be verified at the time of redemption or similarly the seat number on a sporting event ticket.
False-positive testing
False-positive testing derives its name because the testing requires both a false and a positive reaction to authenticate a document. The most common instance is the widely available counterfeit detector marker seen in many banks and stores.
Counterfeit detector markers use a chemical interaction with the substrate, usually paper, of a document turning it a particular color. Usually a marker turns newsprint black and leaves currency or specially treated areas on a document clear or gold. The reaction and coloring varies depending upon the formulation. Banknotes, being a specially manufactured substrate, usually behave differently than standard newsprint or other paper and this difference is how counterfeits are detected by the markers.
False-positive testing can also be done on documents other than currencies as a means to test their authenticity. With the stroke of a marker a symbol, word or value can be revealed that will allow the user to quickly verify the document, such as a coupon. In more advanced applications the marker creates a barcode which can be scanned for verification or reference to other data within the document resulting in a higher degree of assurance of authenticity.
Photocopied documents will lack the special characteristics of the substrate so are easily detectable. False-positive testing generally is a one time test because once done the results remain visible so while useful as part of a coupon this technique is not suitable for ID badges for example.
Fluorescent and phosphorescent dyes
Fluorescent dyes react with fluorescence under ultraviolet light or other unusual lighting. These show up as words, patterns or pictures and may be visible or invisible under normal lighting. This feature is also incorporated into many banknotes and other documents - e.g. Northern Ireland NHS prescriptions show a picture of local '8th wonder' the Giant's Causeway in UV light. Some producers include multi-frequency fluorescence, such that different elements fluoresce under specific frequencies of light. Phosphorescence may accompany fluorescence and shows an after-glow when the UV light is switched off.
Registration of features on both sides
Banknotes are typically printed with fine alignment (so-called see-through registration window) between the offset printing on each side of the note. This allows the note to be examined for this feature, and provides opportunities to unambiguously align other features of the note with the printing. Again, this is difficult to imitate accurately enough in most print shops.
Electronic devices
With the advent of Radio Frequency Identification (RFID) which is based on smart card technology, it is possible to insert extremely small RF-active devices into the printed product to enhance document security. This is most apparent in modern biometric passports, where an RFID chip mirrors the printed information. Biometric passports additionally include data for the verification of an individual's fingerprint or face recognition at automated border control gates.
Thermochromatic ink
Security ink with a set "trigger" temperature, which will either disappear or change colors when the ink is rubbed, usually by the fingertips.
Latent images
Pressure-sensitive or hot-stamped labels characterized by a normal (gray or colored) appearance. When viewed through a special filter (such as a polarizer), an additional, normally latent, image appears. With intaglio printing, a similar effect may be achieved when viewing the banknote from a slanted angle.
Copy detection pattern and digital watermark
A copy detection pattern or a digital watermark can be inserted into a digital image before printing the security document. These security features are designed to be copy-sensitive and authenticated with an imaging device.
See also
Authentication, particularly the subject product authentication
Tamper-evident technology, particularly for money and stamps
Tamper resistance, particularly the subject packaging
Brand protection
Security label
References
External links
The council of the EU: Glossary of Security Documents, Security Features and other related technical terms
EUIPO Anti-Counterfeiting Technology Guide
Documents
Forgery
Packaging
Security
Authentication methods
Engraving
Money forgery
Steganography |
1043642 | https://en.wikipedia.org/wiki/General%20Computer%20Corporation | General Computer Corporation | General Computer Corporation (GCC), later GCC Technologies, was an American printer company formed in 1981 by Doug Macrae, John Tylko, and Kevin Curran. The company began as a video game company and then later changed to make computer peripherals.
History
Video games
They started out making mod-kits for existing arcade games - for example Super Missile Attack, which was sold as an enhancement board to Atari's Missile Command. At first Atari sued, but ultimately dropped the suit and hired GCC to develop games for Atari (and stop making enhancement boards for Atari's games without permission). They created an enhancement kit for Pac-Man called Crazy Otto which they sold to Midway, who in turn sold it as the sequel Ms. Pac-Man; they also developed Jr. Pac-Man, that game's successor.
Under Atari, Inc., GCC made the original arcade games Food Fight, Quantum, and the unreleased Nightmare; developed the Atari 2600 versions of Ms. Pac-Man and Centipede; produced over half of the Atari 5200 cartridges; and developed the chip design for the Atari 7800, plus the first round of cartridges for that base unit.
Peripherals
In 1984, the company changed direction to make peripherals for Macintosh computers: the HyperDrive (the Mac's first internal hard drive), the WideWriter 360 large-format inkjet printer, and the Personal Laser Printer (the first QuickDraw laser printer). In its later years, the company focused exclusively on laser printers.
HyperDrive was unusual because the original Macintosh did not have any internal interface for hard disks. It was attached directly to the CPU, and ran about seven times faster than Apple's "Hard Disk 20", an external hard disk that attached to the floppy disk port.
The HyperDrive was considered an elite upgrade at the time, though it was hobbled by Apple's Macintosh File System, which had been designed to manage 400K floppy disks; as with other early Macintosh hard disks, the user had to segment the drive such that it appeared to be two or more partitions, called Drawers.
The second issue of MacTech Magazine, in January 1985, included a letter that summed up the excitement:
"The BIG news is from a company called General Computer. They announced a Mac mod called HyperDrive, which is a RAM expansion to 512K, and the installation of a 10 meg hard disk with the controller INSIDE THE MACINTOSH. This allows direct booting from the hard disk, free modem port, no serial I/O to slow things down, and no external box to carry around. Price is $2,795 on a 128K machine or $2195 on a 512K machine. They do the installation or you can buy a kit from your dealer."
In 1986 the company shipped the "HyperDrive 2000", a 20MB internal hard disk that also included a Motorola 68881 floating-point unit, but the speed advantage of the HyperDrive had been negated on the new Macintosh Plus computers by Apple's inclusion of an external SCSI port. General Computer responded with the "HyperDrive FX-20" external SCSI hard disk, but drowned in a sea of competitors that offered fast large hard disks.
General Computer changed its name to GCC Technologies and relocated to Burlington, Massachusetts. It continued to sell laser printers until 2015, at which point the company was disestablished.
Notable employees
Elizabeth Betty Ryan
Lucy Gilbert
References
External links
GCC corporate homepage
Video: "College Dreams- the story of General Computer" Play Value - ON Networks
Defunct computer companies of the United States
Computer printer companies
Computer peripheral companies
Defunct video game companies of the United States
Video game development companies
Computer companies established in 1981
Computer companies disestablished in 2015
Electronics companies established in 1981
Electronics companies disestablished in 2015 |
25122 | https://en.wikipedia.org/wiki/PowerBook | PowerBook | The PowerBook (known as Macintosh PowerBook before 1997) is a family of Macintosh laptop computers designed, manufactured and sold by Apple Computer, Inc. from 1991 to 2006. During its lifetime, the PowerBook went through several major revisions and redesigns, often being the first to incorporate features that would later become standard in competing laptops. The PowerBook line was targeted at the professional market, and received numerous awards, especially in the second half of its life, such as the 2001 Industrial Design Excellence Awards "Gold" status, and Engadget's 2005 "Laptop of the Year". In 1999, the line was supplemented by the home and education-focused iBook family.
The PowerBook was replaced by the MacBook Pro in 2006 as part of the Mac transition to Intel processors.
680x0-based models
PowerBook 100 series
In October 1991, Apple released the first three PowerBooks: the low-end PowerBook 100, the more powerful PowerBook 140, and the high end PowerBook 170, the only one with an active matrix display. These machines caused a stir in the industry with their compact dark grey cases, built-in trackball, and the innovative positioning of the keyboard that left room for palmrests on either side of the pointing device. Portable PC computers at the time were still oriented toward DOS, and tended to have the keyboard forward towards the user, with empty space behind it that was often used for function key reference cards. In the early days of Microsoft Windows, many notebooks came with a clip on trackball that fit on the edge of the keyboard molding. As usage of DOS gave way to the graphical user interface, the PowerBook's arrangement became the standard layout all future notebook computers would follow.
The PowerBook 140 and 170 were the original PowerBook designs, while the PowerBook 100 was the result of Apple having sent the schematics of the Mac Portable to Sony, who miniaturized the components. Hence the PowerBook 100's design does not match those of the rest of the series, as it was actually designed after the 140 and 170 and further benefited from improvements learned during their development. The PowerBook 100, however, did not sell well until Apple dropped the price substantially.
The 100 series PowerBooks were intended to tie into the rest of the Apple desktop products utilizing the corporate Snow White design language incorporated into all product designs since 1986. Unlike the Macintosh Portable, however, which was essentially a battery-powered desktop in weight and size, the light colors and decorative recessed lines did not seem appropriate for the scaled-down designs. In addition to adopting the darker grey colour scheme that coordinated with the official corporate look, they also adopted a raised series of ridges mimicking the indented lines on the desktops. The innovative look not only unified their entire product line, but set Apple apart in the marketplace. These early series would be the last to utilize the aging Snow White look, with the 190 adopting a new look along with the introduction of the 500 series.
The first series of PowerBooks were hugely successful, capturing 40% of all laptop sales. Despite this, the original team left to work at Compaq, setting back updated versions for some time. When attempting to increase processing power, Apple was hampered by the overheating problems of the 68040; this resulted in the 100-series PowerBook being stuck with the aging 68030, which could not compete with newer-generation Intel 80486-based PC laptops introduced in 1994. For several years, new PowerBook and PowerBook Duo computers were introduced that featured incremental improvements, including color screens, but by mid-decade, most other companies had copied the majority of the PowerBook's features. Apple was unable to ship a 68040-equipped PowerBook until the PowerBook 500 series in 1994.
The original PowerBook 100, 140, and 170 were replaced by the 145 (updated to the 145B in 1993), 160, and 180 in 1992. The 160 and 180 had video output, allowing them to drive an external monitor. In addition, the PowerBook 180 had a superb-for-the-time active-matrix grayscale display, making it popular with the Mac press. In 1993, the PowerBook 165c was the first PowerBook with a color screen, later followed by the 180c. In 1994, the PowerBook 150, the last true member of the 100-series form factor, was introduced, targeted at value-minded consumers and students. The PowerBook 190, released in 1995, bears no resemblance to the rest of the PowerBook 100 series, and is in fact simply a Motorola 68LC040-based version of the PowerBook 5300 (and the last Macintosh model to utilize a Motorola 68k-family processor). Like the 190, however, the 150 also used the 5300's IDE-based logic-board architecture. From the 100's 68000 processor to the 190's 68LC040 processor, the 100 series PowerBooks span the entire Apple 68K line, with the 190 even upgradable to a PowerPC processor.
PowerBook Duo
In 1992 Apple released a hybrid portable/desktop computer, the PowerBook Duo, continuing to streamline the subnotebook features introduced with the PowerBook 100. The Duos were a series of very thin and lightweight laptops with a minimum of features; they could be inserted into a docking station that provided the system with extra video memory, storage space, and connectors, and allowed connection to a monitor.
PowerBook 500 series
1994 saw the introduction of the Motorola 68LC040-based PowerBook 500 series, code-named Blackbird. These PowerBooks were much sleeker and faster than the 100 series, which they replaced as the mid- and high-end models. The 500 series featured DSTN (520) or active-matrix LCD displays (540 and 550) and stereo speakers, and was the first computer to use a trackpad (although a similar technology had been included on the pioneering Gavilan SC 11 years earlier); it was also the first portable computer to offer built-in Ethernet networking. The PowerBook 500 series was the mainstay of the product line until the PowerBook 5300. The 500 series was the first PowerBook to feature PCMCIA slots, although this was an optional feature that required the user to sacrifice one of the two available battery slots to house the PCMCIA expansion cage.
The PowerBook 500 series was released as Apple was already moving its desktop machines to the PowerPC processor range, and a future upgrade was promised from the start. This came in 1995 as an Apple upgrade board containing a 100 MHz 603e processor and 8 MB of RAM, which snapped into the slot that had held the previous daughterboard with its 25 or 33 MHz 68040 processor and 4 MB of RAM. At the same time Newer Technology offered an Apple-authorized 117 MHz upgrade board, which was more popular than the Apple product and optionally came without any RAM. The company later offered 167 MHz and 183 MHz upgrades containing more memory and onboard cache memory to improve performance. Nonetheless, the internal architecture of the 500 series meant that the speed increase provided by the 100 and 117 MHz upgrades was, for most users, relatively small.
The 500 series was completely discontinued upon the introduction of its replacement, the PPC-based PowerBook 5300, with the PowerBook 190 replacing the 500 as the only 68LC040 PowerBook Apple offered.
PowerPC-based models
The PowerBook 5300, while highly anticipated as one of the first PowerPC-based PowerBooks (along with the PowerBook Duo 2300c, both released on the same day), had numerous problems. In its 5300ce incarnation, with an 800x600-pixel TFT display, Apple offered a 117 MHz PowerPC processor, 32 MB of on-board RAM, and a hot-swappable drive bay; with all of these features, the 5300ce was well ahead of other laptop models at the time. Multiple problems with reliability, stability, and safety were present in the early 5300s; some referred to the model as the "HindenBook" because the lithium-ion batteries used actually burst into flame in Apple tests, necessitating a recall and a downgrade to nickel metal hydride batteries. After Apple offered an Extended Repair Program, the series turned into a remarkably attractive machine, but it never lost its bad reputation. The bad publicity of the 5300 series added to the woes of "beleaguered Apple" during the mid-1990s.
Apple recovered from the 5300 debacle in 1996 and 1997 by introducing three new PowerBooks: the PowerBook 1400, intended to replace the 5300 as a general-purpose PowerBook; the PowerBook 2400, intended as a slim, sleek sub-notebook to replace the PowerBook Duo; and the luxury model PowerBook 3400. The PowerBook 1400 and 3400 were the first PowerBooks ever to include an internal CD drive. Late in 1997, the PowerBook 3400 was adapted into the first PowerBook G3, codenamed the Kanga. This series was the last PowerBook model to employ a "real" keyboard with 1 cm high keys; all later models have flat keys.
PowerBook G3
The first PowerBook G3 Series (completely redesigned from the Kanga) was released in 1998, although it was still an Old World ROM Mac.
These new PowerBooks took design cues from the 500 series PowerBook, sporting dramatic curves and a jet-black plastic case. They were so fashionable that various G3 models became the personal computer of Carrie Bradshaw in the long-running Sex and the City television show. Debuting at roughly the same time as the G3 iMac, the "WallStreet/Mainstreet" series was composed of models with varying features, such as different processor speeds (from 233 to 300 MHz) and the choice of 12-, 13-, or 14-inch screens. They all included dual drive bays capable of accommodating floppy drives, CD-ROM/DVD-ROM drives, hard drives, or even extra batteries. A second PowerBook G3 Series, code-named "PDQ", was introduced later in 1998, with minor changes in configuration options, notably the inclusion of L2 cache in even the lowest-priced 233 MHz model, which helped overall performance.
Apple introduced two later G3 PowerBook models, similar in appearance (curved, black plastic case with black rubberized sections) but thinner, lighter, and with revised internal systems. The "Lombard" (also known as the Bronze Keyboard) appeared in 1999: a thinner, lighter, and faster (333 or 400 MHz) PowerBook with a longer battery life, both USB and SCSI built in, and a New World ROM. It was followed by the "Pismo" in 2000, which replaced the single SCSI port with two FireWire ports, updated the PowerBook line to AGP graphics, a 100 MHz bus speed, and standard DVD-ROM optical drives, and dropped the "G3" from the PowerBook name. The Pismo revision also brought AirPort wireless networking capability (802.11b), which had debuted in Apple's iBook in July 1999. CPU upgrade cards are available for both Lombard and Pismo models.
PowerBook G4
Interim CEO Steve Jobs turned his eye to the redesign of the PowerBook series in 2000. The result, introduced in January 2001, was a completely re-designed New World PowerBook with a titanium skin and a 15.2-inch wide-aspect screen suitable for watching widescreen movies. Built with the PowerPC G4 processor, it was billed as "the first supercomputer you can actually take with you on an airplane." It was lighter than most PC based laptops, and due to the low power consumption of the PowerPC it outlasted them by hours.
The TiBooks, as they were nicknamed, became a fashion item. They were especially popular in the entertainment business, where they adorned many desks in Hollywood motion pictures. Because of their large screens and high performance, Titanium PowerBooks were the first laptops to be widely deployed as desktop replacement computers.
The industrial design of the notebooks quickly became a standard that others in the industry would follow, creating a new wave of wide-screened notebook computers.
The Titanium PowerBooks were released in configurations of 400 MHz, 500 MHz, 550 MHz, 667 MHz, 800 MHz, 867 MHz, and 1 GHz. They are the last PowerBooks able to boot Mac OS 9.
In 2003, Apple launched both the largest-screen laptop in the world and Apple's smallest full-featured notebook computer. Both machines were made of anodized aluminum (coining the new nickname AlBook), featured DVD-burning capabilities, AirPort Extreme networking, Bluetooth, and 12.1-inch or 17-inch LCD displays. The 17-inch model included a fiber optic-illuminated keyboard, which eventually became standard on all 15-inch and 17-inch PowerBooks. Two ambient light sensors, located under each speaker grille, adjusted the brightness of the backlit keyboard and the display according to the light level.
The 12-inch PowerBook's screen did not use the same panel as that used on the 12-inch iBook, while the 17-inch PowerBook used the same screen as that used on the 17-inch flat-panel iMac, but with a thinner backlight.
Later in 2003, the 15-inch PowerBooks were redesigned and featured the same aluminum body style as their smaller and larger siblings, and with the same feature set as the 17-inch model (including the backlit keyboard). This basic design would carry through the transition to the Intel-based MacBook Pro, lasting until late 2008.
In April 2004, the aluminum PowerBooks were upgraded. The SuperDrive was upgraded to 4× burning speed for DVDs, the fastest processor available was upgraded to 1.5 GHz, and the graphics cards were replaced with newer models, offering up to 128 MB of video memory. A third built-in speaker was added to the 12-inch model for improved midrange sound. In addition, AirPort Extreme cards became standard for all PowerBooks instead of being offered as an add-on option.
In January 2005, the specifications of the aluminum PowerBooks were revised once more to accompany a price decrease. Processor speeds were increased to a maximum of 1.67 GHz on the higher specification 15-inch and all 17-inch versions, while the lower specification 15-inch model and the 12-inch unit saw an increase in speed to 1.5 GHz. Optical audio output was added to the 17-inch version. Memory and hard drive defaults were increased to 512 MB and 5400 rpm, respectively, with a new storage maximum of 100 GB on the 17-inch model. Each model also received an enhanced trackpad with scrolling capabilities, a revised Bluetooth module supporting BT 2.0+EDR, and a new feature that parks the drive heads when sudden motion is detected by an internal sensor. Support for the 30-inch Apple Cinema display was also introduced in the new 17-inch model and was optional in the 15-inch model via a build-to-order upgrade to the computer's video hardware. The SuperDrive now included DVD+R capability.
In October 2005, the two higher-end PowerBooks were upgraded once again, with higher-resolution displays (1440 × 960 pixels on the 15-inch model, and 1680 × 1050 pixels on the 17-inch model) and faster 533 MHz DDR2 (PC2-4200) memory. The SuperDrive became standard equipment and included support for dual-layer DVDs on the 15- and 17-inch models. The 17-inch model was updated with a 120 GB standard hard drive, as well as a 7200 rpm, 100 GB build-to-order option. These drives were also options on the 15-inch PowerBook. The 12-inch model with SuperDrive remained unchanged in this respect, although each new PowerBook boasted a longer battery life.
Battery recall
On May 20, 2005, Apple and the Consumer Product Safety Commission announced the recall of some Apple PowerBook G4 batteries. The joint Apple/CPSC press release stated that an internal short could cause the battery cells to overheat, posing a fire hazard. Approximately 128,000 defective units were sold.
Though the problems first appeared to be solved, they continued for many users. In early August 2006, Engadget reported that a PowerBook had "violently exploded" because of a faulty battery. On August 24, 2006, Apple and the CPSC announced an additional recall of more batteries for the same PowerBook models.
About 1.1 million battery packs in the United States were recalled; an additional 700,000 were sold outside the U.S.
These batteries were manufactured by Sony; Dell, Toshiba, Lenovo, HP, Fujitsu and Acer laptops were also affected by the defective batteries.
Discontinuation
At the 2006 Macworld Conference & Expo, the MacBook Pro was introduced. The new notebooks, however, only came in 15.4-inch models and the 12-inch and 17-inch PowerBooks remained available for sale at Apple stores and retailers, as well as the 15-inch model, which was sold until supplies ran out. On April 24, 2006 the 17-inch PowerBook G4 was replaced by a 17-inch MacBook Pro variant. The 12-inch PowerBook G4 remained available until May 16, 2006, when the MacBook was introduced as a replacement for the iBook. Because of its availability in highly powerful configurations, it was also considered a replacement for the 12-inch PowerBook, ending the nearly 15-year production of PowerBook-branded computers.
Traditionally, the portable line trailed the desktops in the utilization of the latest processors, with the notable exception of the PowerBook G3, which was released simultaneously with the desktop Power Macintosh G3. PowerBooks would continue to trail behind the desktop Macs, however, never even adopting the G5 processor. This was due primarily to the extreme heat caused by most of the full-sized processors available and unacceptable power consumption. With the introduction of the Intel-based Macs, once again, the MacBook Pro joined the iMac in sharing the new technology simultaneously.
See also
IBM RS/6000 laptops and the IBM ThinkPad 800 series, other laptops based on PowerPC CPUs
iBook
MacBook
References
External links
Apple's PowerBook specifications - Specifications for G3 and later PowerBooks.
Apple-History
the greatest powerbook collection
Apple press release announcing January 2005 PowerBook revisions
PowerPC Macintosh computers
Computer-related introductions in 1991
Discontinued Apple Inc. products |
6068580 | https://en.wikipedia.org/wiki/Siag%20Office | Siag Office | Siag Office is a tightly integrated free software office package for Unix-like operating systems. It consists of the spreadsheet SIAG ("Scheme In A Grid"), the word processor Pathetic Writer (PW), the animation program Egon Animator, the text editor XedPlus, the file manager Xfiler and the previewer Gvu.
Siag Office is known to be extremely light-weight, hence able to run on very old systems reasonably well, such as on i486 computers with 16MB RAM. Because it is kept light-weight, the software lacks many of the features of major office suites, like LibreOffice, Calligra Suite, or Microsoft Office. Siag Office is distributed under the terms of the GPL-2.0-or-later license.
Version 3.6.0 was released in 2003, and the latest version 3.6.1 was released in 2006.
Siag Office is included in Damn Small Linux, a lightweight Linux distribution.
Components
Siag
Siag is a spreadsheet based on the X Window System and the Scheme programming language, specifically the home-grown variant SIOD ("Scheme in One Defun"). The program has existed in several incarnations: text-based curses for SunOS, text-based hardcoded VT52 for Atari TOS, GEM-based for Atari, Turbo C for DOS, Xlib-based for Linux, and now Xt-based for POSIX-compliant systems.
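To illustrate the general "expressions in a grid" idea behind a Scheme-driven spreadsheet, here is a toy evaluator written in Python rather than SIOD Scheme; the cell layout and helper names are hypothetical and are not Siag's actual syntax or interface.
    # Toy "expressions in a grid" evaluator, in Python rather than SIOD Scheme.
    # Cell references and formulas here are hypothetical, not Siag's own.
    cells = {
        "A1": 10,
        "A2": 32,
        "B1": lambda get: get("A1") + get("A2"),      # formula cell: sum of A1 and A2
        "B2": lambda get: max(get("A1"), get("A2")),  # formula cell: larger of the two
    }

    def get(ref):
        value = cells[ref]
        return value(get) if callable(value) else value  # recompute formulas on demand

    for ref in sorted(cells):
        print(ref, "=", get(ref))   # A1 = 10, A2 = 32, B1 = 42, B2 = 32
Siag itself evaluates Scheme expressions in its cells and adds the file-format support described below, but the dependency idea is the same.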
It can import CSV, Lotus 1-2-3 (.wk1), Scheme code (.scm), ABScript (.abs), and its native Siag (.siag) format, with very limited partial support for XLS and OpenOffice.org XML (.sxc) files. It can export files to CSV, TXT, PostScript (.ps), HTML, Lotus 1-2-3 (.wk1), troff table (.tbl), LaTeX table (.tex), PDF, and its native Siag format.
PW
PW (Pathetic Writer) is an X-based word processor for Unix. Support for RTF (Rich Text Format) allows documents to be exchanged between Pathetic Writer and legacy Windows applications. External converters such as Caolan McNamara's wv can be used to read virtually any format, including Microsoft Word. HTML pages can be loaded and saved, making it possible to instantly publish PW documents on the web.
Egon Animator
Egon Animator is the X-based animation development tool for Unix. The idea is that "objects" (rectangles, lines, pixmaps and so on) are added to a "stage" where they are then made to perform by telling them where they should be and when. It can also edit MagicPoint files.
See also
Comparison of office suites
References
Review
Siag Office is far from pathetic, Linux.com, 2007
External links
Open-source office suites
Free 2D animation software
Office suites |
581581 | https://en.wikipedia.org/wiki/I%20Think%20We%27re%20All%20Bozos%20on%20This%20Bus | I Think We're All Bozos on This Bus | I Think We're All Bozos on This Bus is the fourth comedy album made by the Firesign Theatre for Columbia Records, released in August 1971. In addition to standard stereo formats, the album was released as a Quadraphonic LP and Quadraphonic 8-Track. It was nominated for a Hugo Award for Best Dramatic Presentation in 1972 by the World Science Fiction Society.
Plot
This album, like its predecessor Don't Crush That Dwarf, Hand Me the Pliers, is one complete narrative that covers both sides of one LP. The first LP side is 20 minutes 51 seconds, and the second side is 18 minutes 7 seconds.
Side one starts with an audio segue from the end of Don't Crush That Dwarf, Hand Me the Pliers: the music box tune played by the ice cream truck chased by George Tirebiter is heard approaching, played this time by a bus announcing a free Future Fair, which it touts as "a fair for all, and no fare to anybody". A trio of computer-generated holograms pop up outside the bus: the Whispering Squash (Phil Austin), the Lonesome Beet (David Ossman), and Artie Choke (Peter Bergman), singing "We're back from the shadows again" to the tune of Gene Autry's "Back in the Saddle Again". They encourage the onlookers to attend the fair, which the Beet describes as "technical stimulation" and "government-inflicted simulation". Then they disappear "back to the shadows again". A young man named Clem (Philip Proctor) boards and takes a seat next to Barney (Austin), an older man who identifies himself as a bozo (person with a large nose which honks when squeezed); he says, "I think we're all bozos on this bus." After a stewardess tells the passengers to prepare for "a period of simulated exhilaration", broadcaster Floyd Dan (Bergman) tells them they are riding the rim of the Grand Canyon, the floor of which is five thousand feet below. The "bus" is apparently some sort of hybrid vehicle that can travel on the ground, yet turn into a jet plane which takes off for a "flight to the future".
As Clem and Barney disembark at the fair, a public address announcer directs all bozos to report for cloning, so Barney leaves Clem. The Lonesome Beet pops up and recommends Clem visit the Wall of Science. He boards a moving walkway taking him to the exhibit, which opens with a parody of a religious creation myth and segues into a brief overview of history from ancient times to the emergence of mankind, then to the modern scientific era. Two scientific discoveries are reenacted: Fudd's First Law of Opposition ("If you push something hard enough, it will fall over"), and Teslicle's Deviant ("What comes in, must go out"). Then recordings of selected audience members' reactions to the future are played.
Next, the Honorable Chester Cadaver (Ossman) addresses the audience and relates a meeting with Senator Clive Brown (Bergman), who demonstrates a "model government" consisting of a model-train-sized automated maze of bureaucracies which terminates with an animatronic President as the output bus, of whom, Brown says, everyone asks questions. When Cadaver asks Clem to state his name, he responds "Uh, Clem", and the central computer permanently identifies him as "Ah clem". As side one closes, Clem is directed onto another moving walkway which takes him in to see the President.
On side two, we meet the President (Austin impersonating Richard Nixon). An African American welfare recipient named Jim (Bergman) relates the harsh urban conditions he and his wife live in and asks the President where he can get a job. The President responds with vague, positive-sounding replies only remotely related to the questions and completely unrelated to the citizens' concerns. Barney (or rather, his clone) is next in line, but is given the bum's rush without the chance to ask his question. Then it is Clem's turn; he puts the President into maintenance mode by saying, "This is Worker speaking. Hello." The computer responds with the length of time that it has been running. Clem then attempts to get access to Doctor Memory (the master control), and confuse the system with a riddle: "Why does the porridge bird lay his egg in the air?" This causes the President to shut itself down.
Clem meets up with Barney back on the Funway. They encounter sideshows such as astronaut Mark Time (Ossman) recruiting a crew for a trip to the Haunted Space Station, and Hideo Nutt's Bolt-a-drome, where fairgoers are invited to participate in boxing matches with electrical appliances such as water heaters and toasters. Public announcers repeatedly page Clem to come to the "hospitality shelter", and Artie Choke pops up again, programmed to take lost children back to their parents. He says he will send Deputy Dan to take Clem to the hospitality shelter. Clem then uses Artie to create a clone of himself which enters the system for another confrontation with Dr. Memory. He repeats his porridge bird riddle, which the computer struggles to parse over several attempts, finally mangling it into "Why does the poor rich Barney delay laser's edge in the fair?" Clem succeeds in confusing the computer into contradicting itself, causing a total crash which ends the fair with a display of fireworks.
The entire experience is then revealed to be a vision seen in the crystal ball of a Gypsy doctor (Proctor) telling Barney his fortune. After Barney leaves, the Gypsy plots with his partner (Bergman) to make a quick escape after their last client, a sailor.
Portrayal of theme parks and computer technology
The fair rides and exhibits are similar to those at Disneyland and the 1964 New York World's Fair.
Clem is one of the first "computer hackers" mentioned in pop culture, and his dialogue with the fair's computer includes messages found in the DEC PDP-10, a popular minicomputer at the time. (Some of the lines are error messages from MACLISP.) An identification followed by the word "hello" initiated an interactive session on contemporary Univac, General Electric, and university timesharing systems. Many of the things the computer said were based on ELIZA, a computer program which simulated a Rogerian psychotherapist. For example, the phrase Clem used to put The President into maintenance mode, "this is Worker speaking," is based on the fact that the user could type "worker" at Eliza's command prompt, and Eliza would then display the command prompt for the Lisp software environment in which Eliza ran. And if the user neglected to end a statement or question to Eliza with a punctuation mark, Eliza's parser would fail, displaying the message "Unhappy: MkNam" to indicate that a function called "MkNam" was failing. The President said the same thing, pronouncing it "unhappy macnam."
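The keyword-and-reflection style of ELIZA's replies can be sketched with a toy Python reconstruction; the patterns and canned responses below are illustrative only and are not taken from Weizenbaum's original program or the Lisp environment described above.
    # Toy ELIZA-style responder: keyword matching plus pronoun "reflection".
    # A simplified reconstruction for illustration; not the original program.
    import re

    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                   "you": "I", "your": "my", "are": "am"}

    RULES = [
        (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
    ]

    def reflect(fragment):
        # Swap first and second person so the fragment can be echoed back.
        return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

    def respond(statement):
        for pattern, template in RULES:
            match = pattern.match(statement.strip())
            if match:
                return template.format(*(reflect(g) for g in match.groups()))
        return "Please go on."  # noncommittal fallback for unmatched input

    print(respond("I am worried about my porridge bird"))
    # -> How long have you been worried about your porridge bird?
The fallback line mirrors the way ELIZA-style programs deflect input they cannot match.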
Award nominations
The album was nominated by the World Science Fiction Society in 1972 for the Hugo Award for Best Dramatic Presentation.
Film
This album inspired Ivan Stang's 1973 film Let's Visit the World of the Future.
Cultural influence
Apple's Siri used to respond to "This is worker speaking. Hello." with "Hello, Ah-Clem. What function can I perform for you? LOL."
Issues and reissues
This album was originally released simultaneously on LP, Cassette, SQ Quad LP, and Quad 8-Track.
LP - Columbia C-30737
Cassette - Columbia CA-30737
Quad LP - Columbia CQ-30737
Quad 8-Track - Columbia CAQ-30737
It has been re-released on CD at least three times:
1989 - Mobile Fidelity MFCD-785
2001 - CBS/Epic
2001 - Laugh.com LGH1073
See also
Futurama (New York World's Fair)
References
Firesign Theatre. I Think We're All Bozos on This Bus. Columbia Records, 1971.
Firesign Theatre. I Think We're All Bozos on This Bus. Mobile Fidelity, 1989.
Firesign Media: I Think We're All Bozos On This Bus.
"FIREZINE: Linques!." Firesign Theatre FAQ. 20 Jan. 2006 <>.
Marsh, Dave, and Greil Marcus. "The Firesign Theatre." The New Rolling Stone Record Guide. Ed. Dave Marsh and John Swenson. New York: Random House, 1983. 175-176.
Smith, Ronald L. The Goldmine Comedy Record Price Guide. Iola: Krause, 1996.
1971 albums
The Firesign Theatre albums
Columbia Records albums
CBS Records albums
Epic Records albums
Science fiction comedy
1970s comedy albums |
43639016 | https://en.wikipedia.org/wiki/Ali%20Partovi | Ali Partovi | Ali Partovi (born 1972) is an Iranian-American entrepreneur and angel investor. He is best known as a co-founder of Code.org (which he founded with his twin brother Hadi), iLike, and LinkExchange, as an early advisor to Dropbox, and as an early promoter of bid-based search advertising. Partovi serves on the board of directors at FoodCorps. He is currently the CEO of Neo, a mentorship community and venture fund he established in 2017.
Early life and education
Ali Partovi was born in 1972 alongside his twin brother Hadi Partovi, and the brothers grew up amid the Iranian Revolution and the Iran-Iraq War. Both of his parents were intellectuals: his mother studied computer science in Boston, and his father, Firouz Partovi, was a university professor in the faculty of physics. His cousins include Dara Khosrowshahi, Amir Khosrowshahi (co-founder of Nervana Systems), and Farzad "Fuzzy" Khosrowshahi (co-founder of XL2Web, which was acquired to become Google Sheets). Ali and Hadi began coding when they were ten, on a Commodore 64 their father had brought back from a seminar he attended abroad. The family fled Iran during the Iranian Revolution and moved to the United States, where Ali studied computer science, earning both a bachelor's and a master's degree from Harvard University.
Career
Oracle
Ali Partovi worked as a software consultant at Oracle from August 1994 to April 1996. He worked on the Interactive TV project as a field engineer, helping deploy trials for telco and cable companies.
LinkExchange
Ali Partovi joined Tony Hsieh and Sanjay Mandan in 1996 as a co-founder of the internet company LinkExchange. The three were later joined by Alfred Lin, who served as CFO. Ali says that he was recruited for both his computer programming skills and his business management skills. He worked in sales, marketing, finance, and business development until Microsoft acquired the company in 1998 for $265 million. At the time, LinkExchange reached 400,000 sites and about 21 million consumers.
Partovi was one of the first people to recognize the paid-search opportunity, because he saw how badly small business owners wanted their businesses to show up in search results. In 1998, LinkExchange acquired Submit It!, started by college dropout Scott Bannister, which helped site owners submit URLs to search engines. After Microsoft acquired LinkExchange, Partovi stayed on and became the lead project manager for MSN Keywords. However, executives at Microsoft, Yahoo, Excite, and other search companies had their hopes pinned on banner ads. When Microsoft came to view MSN Keywords as a threat to banner advertising and shut it down in 2000, Partovi left.
iLike
This online platform, meant to help users discover new artists, grew out of GarageBand. GarageBand was established in 1999 as a site where independent artists could post their music and other users would discover them. Ali had bought the assets of GarageBand in 2002, saving it from bankruptcy. When he and his brother set out to reinvent the company, they founded iLike in 2006. Ali became CEO, and Hadi became President.
The service made use of a sidebar which made it easy for users to discover new artists. It became a massive success within the first few months of launching. Users could directly register on the platform or use third-party networks such as Facebook. iLike had a "post-once publish-everywhere" dashboard for artists.
iLike had raised funding pre-launch at a $50 million valuation. After 7 or 8 years, iLike was acquired by Myspace for $20 million.
Initially, Apple was interested in purchasing iLike because Jobs was impressed by the product and the team. However, Partovi cost iLike the acquisition by telling Steve Jobs that the company was worth three times as much as Jobs was offering ($50 million). When iLike could not produce a competing offer, Jobs ended negotiations. Partovi refers to the Steve Jobs encounter as one of his "most painful memories".
Code.org
Ali and Hadi created Code.org in 2013 as a non-profit initiative to promote computer science, and the two brothers funded the initiative. They believe that everyone in the world should be able to read and write code, yet many American public schools do not offer computer science classes. Ali and Hadi launched a short video featuring Mark Zuckerberg, Bill Gates, Jack Dorsey, and others to inspire kids to learn how to code; the video garnered over 15 million views on YouTube. Ali also helped establish the Hour of Code, a tutorial that introduces students to programming.
Angel Investing
From 1998 to 2017, Ali backed major tech companies, including Facebook, Airbnb, Dropbox, Uber, and Zappos. Ali and his twin brother Hadi have been identified as among the most prominent angel investors, with portfolios containing a very high number of now-successful companies. They state that they fund people based on their tech talent. Ali stopped angel investing in 2017.
Although Partovi has backed many tech startups, early on he passed on ventures he thought were terrible ideas, and many of those companies turned out to be successful. One such company is Google: he was afraid of betting on a search engine, since there was too much competition. Another missed opportunity was PayPal. Ali says, "I wish at that time someone had told me that, like, if one of the smartest people you know starts a company, just don't ask questions. Figure out how to invest in it."
Once, Ali was having lunch with a Wall Street journalist while Brian Chesky was at the next table. Brian pitched Airbnb to him after overhearing that Ali was an angel investor, but Ali did not follow up. This caused Ali to miss out on Airbnb as an early investor, though he was able to invest later at a higher valuation.
Neo
In 2017, Ali Partovi founded Neo, a community of mentors meant to accelerate the development of leadership in the tech industry. Ali stopped angel investing around this time, as the new company includes a venture fund. The company identifies top computer science students and accelerates their careers by making introductions and investing in their startups. It has $200 million in venture funds, has made over 1,200 introductions to startups, and counts 539 community members and 57 portfolio companies.
The idea was born out of the premise of pro-sports scouting. Ali observed how basketball agents and coaches scouted the best players on college and high school teams. In 2016, he had a conversation with Stephen Curry, who was with the Warriors at the time, and the idea of scouting tech engineers ceased being just a theory. He would identify the most brilliant engineers, recruit them, and introduce them to the mentorship community, made up of veteran technologists.
In one recorded session with recruits, Ali encourages them not to shy away from taking risks. He illustrates the model he uses to weigh opportunities by plotting the probability of success against the expected value, and he states that the size of the reward matters more than the probability of success. Reducing risk may drag out the outcome of an identified opportunity, resulting only in a small success and a large waste of resources and time. For Ali, spectacular failure and spectacular success both constitute good outcomes.
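A crude way to express this expected-value reasoning is to multiply the probability of success by the payoff; the numbers in the Python sketch below are purely hypothetical and are not figures Partovi has cited.
    # Illustrative expected-value comparison; all numbers are hypothetical.
    def expected_value(probability_of_success, payoff_if_successful):
        return probability_of_success * payoff_if_successful

    bold_bet = expected_value(0.10, 1_000_000_000)   # long shot at a huge outcome
    safe_bet = expected_value(0.90, 10_000_000)      # near-certain, modest outcome
    print(f"bold bet: ${bold_bet:,.0f}")   # $100,000,000
    print(f"safe bet: ${safe_bet:,.0f}")   # $9,000,000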
Writing
Between 2000 and 2001, Ali wrote a screenplay with his friend Alan Shusterman. They traveled to LA to make connections, but had their luggage and laptops stolen instead.
In 2020, Ali wrote an article on immigration policy. He highlights his experiences in America as an Iranian immigrant and proposes changes to immigration policies that he believes are detrimental even to America itself.
In 2021, Ali published an article in which he highlights the challenges that startups face when they are presented with acquisition deals. The article warns startup CEOs against taking hype too far.
Personal life
Ali grew up playing the piano with his brother and remains an avid musician. He also enjoys rock climbing and going to the gym.
As an immigrant himself, he is passionate about legislation affecting immigrant populations, including foreign and educational policies. He has claimed to have faced problems obtaining travel documents since his first job due to his Iranian heritage. He has spoken out about the restrictive consequences of the recent immigration bills passed in the United States. In his article, 'Immigrants are Humans,' Ali states how he and other immigrants had been deported as 12-year-olds and how such policies do not help anybody and hurt America.
As of 2019, Ali has four children from two marriages.
References
American computer businesspeople
American people of Iranian descent
Living people
People from Tehran
Harvard University alumni
American company founders
1972 births
21st-century American businesspeople |
11209738 | https://en.wikipedia.org/wiki/Internet%20censorship%20in%20the%20United%20States | Internet censorship in the United States | Internet censorship in the United States is the suppression of information published or viewed on the Internet in the United States. The First Amendment of the United States Constitution protects freedom of speech and expression against federal, state, and local government censorship.
In 2014, the United States was added to Reporters Without Borders (RWB)'s list of "Enemies of the Internet", a group of countries with the highest level of Internet censorship and surveillance. RWB stated that the U.S. has "undermined confidence in the Internet and its own standards of security" and that "U.S. surveillance practices and decryption activities are a direct threat to investigative journalists, especially those who work with sensitive sources for whom confidentiality is paramount and who are already under pressure."
In U.S. government-funded Freedom House's Freedom On the Net 2017 report covering the period from June 2016 to May 2017, the United States was rated the fifth most free of the 65 countries rated.
Overview
The strong protections for freedom of speech and expression against federal, state, and local government censorship are rooted in the First Amendment of the United States Constitution. These protections extend to the Internet and as a result very little government mandated technical filtering occurs in the US. Nevertheless, the Internet in the United States is highly regulated, supported by a complex set of legally binding and privately mediated mechanisms.
After more than two decades of ongoing contentious debate over content regulation, the country is still very far from reaching political consensus on the acceptable limits of free speech and the best means of protecting minors and policing illegal activity on the Internet. Gambling, cybersecurity, and dangers to children who frequent social networking sites are important ongoing debates. Significant public resistance to proposed content restriction policies has prevented the more extreme measures used in some other countries from taking hold in the U.S.
Public dialogue, legislative debate, and judicial review have produced filtering strategies in the United States that are different from those found in most of the rest of the world. Many government-mandated attempts to regulate content have been barred on First Amendment grounds, often after lengthy legal battles. However, the government has been able to exert pressure indirectly where it cannot directly censor. With the exception of child pornography, content restrictions tend to rely more on the removal of content than blocking; most often these controls rely upon the involvement of private parties, backed by state encouragement or the threat of legal action. In contrast to much of the rest of the world, where ISPs are subject to state mandates, most content regulation in the United States occurs at the private or voluntary level.
The first wave of regulatory actions in the 1990s in the United States came about in response to the profusion of sexually explicit material on the Internet within easy reach of minors. Since that time, several legislative attempts at creating a mandatory system of content controls in the United States have failed to produce a comprehensive solution for those pushing for tighter controls. At the same time, the legislative attempts to control the distribution of socially objectionable material on the Internet in the United States have given rise to a robust system that limits liability over content for Internet intermediaries such as Internet service providers (ISPs) and content hosting companies.
Proponents of protecting intellectual property online in the United States have been much more successful, producing a system to remove infringing materials that many feel errs on the side of inhibiting legally protected speech. The US practices forceful seizures of domains and computers, at times without notification, causing the websites to be unable to continue operating. Some high-profile cases are Napster, WikiLeaks, The Pirate Bay, and MegaUpload.
National security concerns have spurred efforts to expand surveillance of digital communications and fueled proposals for making Internet communication more traceable.
Federal laws
With a few exceptions, the free speech provisions of the First Amendment bar federal, state, and local governments from directly censoring the Internet. The primary exception has to do with obscenity, including child pornography, which does not enjoy First Amendment protection.
Computer Fraud and Abuse Act (CFAA)
The Computer Fraud and Abuse Act (CFAA) was enacted in 1986 as an amendment to an existing computer fraud law, which was part of the Comprehensive Crime Control Act of 1984. The CFAA prohibits accessing a computer without authorization, or in excess of authorization. Since 1986, the Act has been amended a number of times: in 1989, 1994, 1996, in 2001 by the USA PATRIOT Act, in 2002, and in 2008 by the Identity Theft Enforcement and Restitution Act. The CFAA is both a criminal law and a statute that creates a private right of action, allowing private individuals and companies to sue to recover damages caused by violations of this law.
Provisions of the CFAA effectively make it a federal crime to violate the terms of service of Internet sites, allowing companies to forbid legitimate activities such as research, or limit or remove protections found elsewhere in law. Terms of service can be changed at any time without notifying users. Tim Wu called the CFAA "the worst law in technology".
Aggressive prosecution under the Computer Fraud and Abuse Act (CFAA) has fueled growing criticism of the law's scope and application. In 2013 a bipartisan group of lawmakers introduced legislation that would prevent the government from using the CFAA to prosecute terms-of-service violations and stop prosecutors from bringing multiple redundant charges for a single crime. The bill was reintroduced in 2015, but did not garner enough support to move forward.
Communications Decency Act (CDA)
In 1996, the United States enacted the Communications Decency Act (CDA), which attempted to regulate both indecency (when available to children) and obscenity in cyberspace. In 1997, in the case of Reno v. ACLU, the United States Supreme Court found the anti-indecency provisions of the Act unconstitutional. Writing for the Court, Justice John Paul Stevens held that "the CDA places an unacceptably heavy burden on protected speech".
Section 230 is a separate portion of the CDA that remains in effect. Section 230 says that operators of Internet services are not legally liable for the words of third parties who use their services and also protects ISPs from liability for good faith voluntary actions taken to restrict access to certain offensive materials or giving others the technical means to restrict access to that material.
Child Online Protection Act (COPA)
In 1998, the United States enacted the Child Online Protection Act (COPA) to restrict access by minors to any material defined as harmful to such minors on the Internet. The law was found to be unconstitutional because it would hinder protected speech among adults. It never took effect, as three separate rounds of litigation led to a permanent injunction against the law in 2009. Had the law taken effect, it would have effectively made it illegal to post any commercial content on the Internet that is knowingly harmful to children without some sort of vetting program to confirm users' ages.
Digital Millennium Copyright Act (DMCA)
Signed into law in 1998, the Digital Millennium Copyright Act (DMCA) criminalizes the production and dissemination of technology that could be used to circumvent copyright protection mechanisms and makes it easier to act against alleged copyright infringement on the Internet. The Online Copyright Infringement Liability Limitation Act (OCILLA) is included as Title II of the DMCA and limits the liability of online service providers for copyright infringement by their users.
Children's Online Privacy Protection Act (COPPA)
The Children's Online Privacy Protection Act (COPPA) went into effect on 21 April 2000. It applies to the online collection of personal information by persons or entities under U.S. jurisdiction from children under 13 years of age and details what a website operator must include in a privacy policy, when and how to seek verifiable consent from a parent or guardian, and what responsibilities an operator has to protect children's privacy and safety online including restrictions on the marketing to those under 13. While children under 13 can legally give out personal information with their parents' permission, many websites disallow underage children from using their services altogether, due to the cost and amount of paperwork necessary for compliance.
Children's Internet Protection Act (CIPA)
In 2000 the Children's Internet Protection Act (CIPA) was signed into law.
CIPA requires K-12 schools and libraries receiving federal Universal Service Fund (E-rate) discounts or LSTA grants for Internet access or internal connections to:
adopt and implement an Internet safety policy addressing: (a) access by minors to inappropriate matter on the Internet; (b) the safety and security of minors when using electronic mail, chat rooms, and other forms of direct electronic communications; (c) unauthorized access, including so-called "hacking," and other unlawful activities by minors online; (d) unauthorized disclosure, use, and dissemination of personal information regarding minors; and (e) measures restricting minors' access to materials harmful to them;
install internet filters or blocking software that prevents access to pictures that are: (a) obscene, (b) child pornography, or (c) harmful to minors (for computers that are accessed by minors);
to allow the filtering or blocking to be disabled upon the request of an adult; and
adopt and enforce a policy to monitor the online activities of minors.
CIPA does not:
require the tracking of Internet use by minors or adults; or
affect E-rate funding for schools and libraries receiving discounts for telecommunications services, such as telephone service, but not for Internet access or internal connections.
Trading with the Enemy Act (TWEA)
In March 2008, the New York Times reported that a blocklist published by the Office of Foreign Assets Control (OFAC), an agency established under the Trading with the Enemy Act 1917 and other federal legislation, included a number of websites, so that US companies are prohibited from doing business with those websites and must freeze their assets. The blocklist has the effect that domain name registrars based in the US must block those websites. According to the New York Times, eNom, a private domain name registrar and Web hosting company operating in the US, disables domain names which appear on the blocklist. It describes eNom's disabling of a European travel agent's web sites advertising travel to Cuba, which appeared on the list published by OFAC. According to the report, the US government claimed that eNom was "legally required" to block the websites under US law, even though the websites were not hosted in the US, were not targeted at US persons and were legal under foreign law.
Cybersecurity Information Sharing Act (CISA)
The Cybersecurity Information Sharing Act (CISA) is designed to "improve cybersecurity in the United States through enhanced sharing of information about cybersecurity threats, and for other purposes". The law allows the sharing of Internet traffic information between the U.S. government and technology and manufacturing companies. The text of the bill was incorporated by amendment into a consolidated spending bill in the U.S. House on December 15, 2015, which was signed into law by President Barack Obama on December 18, 2015.
Opponents question the CISA's value, believing it will move responsibility from private business to the government, thereby increasing vulnerability of personal private information, as well as dispersing personal private information across seven government agencies, including the NSA and local police. Some felt that the act was more amenable to surveillance than actual security after many of the privacy protections from the original bill were removed.
Stop Advertising Victims of Exploitation Act of 2015 (SAVE)
The Stop Advertising Victims of Exploitation Act of 2015 (SAVE) is part of the larger Justice for Victims of Trafficking Act of 2015 which became law in May 2015. The SAVE Act makes it illegal to knowingly advertise content related to sex trafficking, including online advertising. The law establishes federal criminal liability for third-party content. There is a concern that this will lead companies to over-censor content rather than face criminal penalties, or to limit the practice of monitoring content altogether so as to avoid "knowledge" of illegal content.
Americans with Disabilities Act (ADA)
In 2016, complainants from Gallaudet University brought a lawsuit against UC Berkeley for not adding closed captioning to the recorded lectures it made free to the public. In what many commentators called an unintended consequence of the Americans with Disabilities Act of 1990, the Department of Justice ruling resulted in Berkeley deleting 20,000 of the freely licensed videos instead of making them more accessible.
Allow States and Victims to Fight Online Sex Trafficking Act - Stop Enabling Sex Traffickers Act (FOSTA-SESTA)
Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) is a bill introduced in the U.S. House of Representative by Ann Wagner in April 2017. Stop Enabling Sex Traffickers Act (SESTA) is a similar U.S. Senate bill introduced by Rob Portman in August 2017. The combined FOSTA-SESTA package passed the House on February 27, 2018 with a vote of 388-25 and the Senate on March 21, 2018 with a vote of 97–2. The bill was signed into law by Donald Trump on April 11, 2018.
The bill amended Section 230 of the Communications Decency Act to exclude the enforcement of federal and state sex trafficking laws from immunity, and clarified the Stop Advertising Victims of Exploitation Act to define participation in a venture as knowingly assisting, facilitating, or supporting sex trafficking.
The bills were criticized by pro-free speech and pro-Internet groups as a "disguised internet censorship bill" that weakens the section 230 safe harbors, places unnecessary burdens on internet companies and intermediaries that handle user-generated content or communications with service providers required to proactively take action against sex trafficking activities, and requiring a "team of lawyers" to evaluate all possible scenarios under state and federal law (which may be financially unfeasible for smaller companies). Online sex workers argued that the bill would harm their safety, as the platforms they utilize for offering and discussing sexual services (as an alternative to street prostitution) had begun to reduce their services or shut down entirely due to the threat of liability under the bill.
Proposed federal legislation that has not become law
Deleting Online Predators Act (DOPA)
The Deleting Online Predators Act of 2006 was introduced, but did not become law. Two similar bills were introduced in 2007, but neither became law.
The proposed legislation would have required schools, some businesses, and libraries to block minors' access to social networking websites. The bill was controversial because, according to its critics, it would limit access to a wide range of websites, including many with harmless and educational material.
Protecting Cyberspace as a National Asset Act (PCNAA)
The Protecting Cyberspace as a National Asset Act was introduced in 2010, but did not become law.
The proposed Act caused controversy for what critics perceived as its authorization for the U.S. president to apply a full block of the Internet in the U.S.
A new bill, the Executive Cyberspace Coordination Act of 2011, was under consideration by the U.S. Congress in 2011. The new bill addresses many of the same issues as the Protecting Cyberspace as a National Asset Act, but takes quite a different approach.
Combating Online Infringement and Counterfeits Act (COICA)
The Combating Online Infringement and Counterfeits Act was introduced in September 2010, but did not become law.
The proposed Act would have allowed the U.S. Attorney General to bring an in rem action against an infringing domain name in United States District Court, and seek an order requesting injunctive relief. If granted, such an order would compel the registrar of the domain name in question to suspend operation of, and may lock, the domain name.
The U.S. Justice Department would maintain two publicly available lists of domain names. The first list would contain domain names against which the Attorney General had obtained injunctions. The second list would contain domains alleged by the Justice Department to be infringing, but against which no action had been taken. Any service provider who willingly took steps to block access to sites on the second list would be immune from prosecution under the bill.
Stop Online Piracy Act (SOPA)
The Stop Online Piracy Act (SOPA), also known as H.R. 3261, is a bill that was introduced in the United States House of Representatives on October 26, 2011, by Representative Lamar Smith (R-TX) and a bipartisan group of 12 initial co-sponsors. The originally proposed bill would allow the U.S. Department of Justice, as well as copyright holders, to seek court orders against websites accused of enabling or facilitating copyright infringement. Depending on who requests the court orders, the actions could include barring online advertising networks and payment facilitators such as PayPal from doing business with the allegedly infringing website, barring search engines from linking to such sites, and requiring Internet service providers to block access to such sites. Many have argued that, because ISPs would be required to block access to certain websites, the bill amounts to censorship. On January 18, 2012, the English Wikipedia shut down for 24 hours beginning at 5:00 UTC (midnight EST) to protest SOPA and PIPA. In the wake of this and many other online protests, Rep. Smith stated, "The House Judiciary Committee will postpone consideration of the legislation until there is wider agreement on a solution".
Senator Ron Wyden, an Oregon Democrat and a key opponent of the bills, said lawmakers had collected more than 14 million names — more than 10 million of them voters — who contacted them to protest the once-obscure legislation.
Protect Intellectual Property Act (PIPA)
The Protect Intellectual Property Act (Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property Act, or PIPA) was a proposed law with the stated goal of giving the US government and copyright holders additional tools to curb access to "rogue websites dedicated to infringing or counterfeit goods", especially those registered outside the U.S. The bill was introduced on May 12, 2011, by Senator Patrick Leahy (D-VT) and 11 bipartisan co-sponsors. PIPA is a re-write of the Combating Online Infringement and Counterfeits Act (COICA), which failed to pass in 2010. In the wake of online protests held on January 18, 2012, Senate Majority Leader Harry Reid announced on Friday January 20 that a vote on the bill would be postponed until issues raised about the bill were resolved. Reid urged Leahy, the chief sponsor of PIPA, to "continue engaging with all stakeholders to forge a balance between protecting Americans' intellectual property, and maintaining openness and innovation on the internet."
Cyber Intelligence Sharing and Protection Act (CISPA)
The Cyber Intelligence Sharing and Protection Act (CISPA) was a proposed law introduced in November 2011, with the stated goal of giving the U.S. government additional options and resources to ensure the security of networks against attacks. It was passed by the U.S. House of Representatives in April 2012, but was not passed by the U.S. Senate. The bill was reintroduced in the House in February 2013 and again in January 2015. While this bill never became law, a similar bill from the U.S. Senate, the Cybersecurity Information Sharing Act (CISA), was incorporated by amendment into a consolidated spending bill in the U.S. House on December 15, 2015, and was signed into law by President Barack Obama on December 18, 2015.
CISPA was supported by several trade groups containing more than eight hundred private companies, including the Business Software Alliance, CTIA – The Wireless Association, Information Technology Industry Council, Internet Security Alliance, National Cable & Telecommunications Association, National Defense Industrial Association, TechAmerica and United States Chamber of Commerce, in addition to individual major telecommunications and information technology companies like AT&T, Facebook, IBM, Intel, Oracle Corporation, Symantec, and Verizon.
Reporters Without Borders expressed concern that in the name of the war on cyber crime, it would allow the government and private companies to deploy draconian measures to monitor, even censor, the Web. Other organizations that oppose the bill include the Constitution Project, American Civil Liberties Union, Electronic Frontier Foundation, Center for Democracy and Technology, Fight for the Future, Free Press, Sunlight Foundation, and TechFreedom. Google did not take a public position on the bill, but lobbied for it.
State laws
In November 2016 the National Conference of State Legislatures listed twenty-seven states with laws that apply to Internet use at publicly funded schools or libraries:
The majority of these states simply require school boards/districts or public libraries to adopt Internet use policies to prevent minors from gaining access to sexually explicit, obscene or harmful materials. However, some states also require publicly funded institutions to install filtering software on library terminals or school computers.
The states that require schools and/or libraries to adopt policies to protect minors include: California, Delaware, Georgia, Indiana, Iowa, Kentucky, Louisiana, Maryland, Massachusetts, New Hampshire, New York, Rhode Island, South Carolina, and Tennessee. Florida law "encourages public libraries to adopt an Internet safety education program, including the implementation of a computer-based educational program."
The states that require Internet filtering in schools and/or libraries to protect minors are: Arizona, Arkansas, Colorado, Idaho, Kansas, Michigan, Minnesota, Missouri, Ohio, Pennsylvania, South Dakota, Utah, and Virginia.
And five states require Internet service providers to make a product or service available to subscribers to control use of the Internet. They are: Louisiana, Maryland, Nevada, Texas, and Utah.
In July 2011 Missouri lawmakers passed the Amy Hestir Student Protection Act which included a provision that barred K-12 teachers from using websites that allow "exclusive access" in communications with current students or former students who are 18 or younger, such as occurs with private messages on sites such as Facebook. A circuit court order issued before the law went into effect blocked the provision because "the breadth of the prohibition is staggering" and the law "would have a chilling effect" on free-speech rights guaranteed under the US Constitution. In September the legislature replaced the controversial provision with a requirement that local school districts develop their own policies on the use of electronic communication between employees and students.
In December 2016, Bill Chumley, member of the South Carolina House of Representatives, introduced a bill that would require all computers to be sold with "digital blocking capabilities" to restrict access to pornographic materials. Users or manufacturers would be required to pay a $20 fee in order to lift the blocks. As of April 2018 the bill had not become law, but remained pending before the House Committee on Judiciary.
In March 2018, Frank Ciccone and Hanna Gallo, members of the Rhode Island State Senate, introduced a bill requiring Internet Service Providers to institute a block on pornographic materials, which could be lifted with the payment of a $20 fee.
Censorship by institutions
The constitutional and other legal protections that prohibit or limit government censorship of the Internet do not generally apply to private corporations. Corporations may voluntarily choose to limit the content they make available or allow others to make available on the Internet. Corporations may be encouraged by government pressure or required by law or court order to remove or limit Internet access to content that is judged to be obscene (including child pornography), harmful to children, or defamatory, or that poses a threat to national security, promotes illegal activities such as gambling, prostitution, or theft of intellectual property, constitutes hate speech, or incites violence.
Public and private institutions that provide Internet access for their employees, customers, students, or members will sometimes limit this access in an attempt to ensure it is used only for the purposes of the organization. This can include content-control software to limit access to entertainment content in business and educational settings and limiting high-bandwidth services in settings where bandwidth is at a premium. Some institutions also block outside e-mail services as a precaution, usually initiated out of concerns for local network security or concerns that e-mail might be used intentionally or unintentionally to allow trade secrets or other confidential information to escape.
Schools and libraries
K-12 schools and libraries that accept funds from the federal E-rate program or Library Services and Technology Act grants for Internet access or internal connections are required by Children's Internet Protection Act to have an "Internet safety policy and technology protection measures in place".
Many K-12 school districts in the United States use Internet filters to block material deemed inappropriate for the school setting. The federal government leaves decisions about what to filter or block to local authorities. However, many question this approach, feeling that such decisions should be made by a student's parents or guardian. Some of the fears associated with Internet filtering in schools include the risk of supporting a predominant ideology, the imposition of filter manufacturers' views on students, over-blocking of useful information, and under-blocking of harmful information. A 2003 study "found that blocking software overblocked state-mandated curriculum topics extensively–for every web page correctly blocked as advertised, one or more was blocked incorrectly."
Some libraries may also block access to certain web pages, including pornography, advertising, chat, gaming, social networking, and online forum sites, but there is a long and important tradition among librarians against censorship and the use of filtering and blocking software in libraries remains very controversial.
Search engines and social media
In 2007, Verizon attempted to block the abortion rights group NARAL Pro-Choice America from using its text messaging services to speak to its supporters. Verizon claimed this was done to enforce a policy that does not allow its customers to use the service to communicate "controversial" or "unsavory" messages. Comcast, AT&T and many other ISPs have also been accused of regulating internet traffic and bandwidth.
eNom, a private domain name registrar and Web hosting company operating in the U.S., disables domain names which appear on a U.S. Treasury Department blocklist.
Military
The Department of Defense prohibits its personnel from accessing certain IP addresses from DoD computers. The US military's filtering policy is laid out in a report to Congress entitled "Department of Defense Personnel Access to the Internet".
In October 2009, military blogger C.J. Grisham was temporarily pressured by his superiors at Redstone Arsenal to close his blog, A Soldier's Perspective, after complaining about local public school officials pushing a mandatory school uniform program without parental consent.
The Monterey Herald reported on June 27, 2013 that the United States Army bars its personnel from accessing parts of The Guardian website after whistleblower Edward Snowden's revelations about the PRISM global surveillance program and the National Security Agency (NSA) were published there. The entire Guardian website is blocked for personnel stationed throughout Afghanistan, the Middle East, and South Asia, as well as personnel stationed at U.S. Central Command headquarters in Florida.
WikiLeaks
In February 2008, the Bank Julius Baer vs. WikiLeaks lawsuit prompted the United States District Court for the Northern District of California to issue a permanent injunction against the website WikiLeaks' domain name registrar. The result was that WikiLeaks could not be accessed through its web address. This elicited accusations of censorship and resulted in the Electronic Frontier Foundation stepping up to defend WikiLeaks. After a later hearing, the injunction was lifted.
In December 2010, the White House Office of Management and Budget, the U.S. Library of Congress, the U.S. Air Force, and other government agencies began advising their personnel not to read classified documents available from WikiLeaks and some blocked access to WikiLeaks and other news organizations' websites. This action was intended to reduce the exposure of personnel to classified information released by WikiLeaks and published by those news organizations.
On December 1, 2010 Amazon.com cut off WikiLeaks 24 hours after being contacted by the staff of Joe Lieberman, Chairman of the U.S. Senate Committee on Homeland Security. In a statement Lieberman said:
[Amazon's] decision to cut off WikiLeaks now is the right decision and should set the standard for other companies WikiLeaks is using to distribute its illegally seized material. I call on any other company or organization that is hosting WikiLeaks to immediately terminate its relationship with them.
Constitutional lawyers say that this is not a first amendment issue because Amazon, as a private company, is free to make its own decisions. Kevin Bankston, a lawyer with the Electronic Frontier Foundation, agreed that this is not a violation of the first amendment, but said it was nevertheless disappointing. "This certainly implicates first amendment rights to the extent that web hosts may, based on direct or informal pressure, limit the materials the American public has a first amendment right to access".
The New York Times reported on 14 December that the U.S. Air Force bars its personnel from access to news sites (such as those of The New York Times and The Guardian, Le Monde, El País, and Der Spiegel) that publish leaked cables.
WikiLeaks faces a global financial blockade by major finance companies including Moneybookers, MasterCard, Visa, and PayPal. In October 2011 Julian Assange said the blockade had destroyed 95% of WikiLeaks' revenues and announced that it was suspending publishing operations in order to focus on fighting the blockade and raising new funds.
Individual websites
Some websites that allow user-contributed content practice self-censorship by adopting policies on how the web site may be used and by banning or requiring pre-approval of editorial contributions from users that do not follow the policies for the site. For example, social media websites may restrict hate speech to a larger degree than is required by US law, and may restrict harassment and verbal abuse.
Restriction of hate speech and harassment on social media is the subject of debate in the US. For example, two perspectives include that online hate speech should be removed because it causes serious intimidation and harm, and that it shouldn't be removed because it's "better to know that there are bigots among us" than to have an inaccurate picture of the world.
The National Religious Broadcasters, an organization that represents American Christian television and radio broadcasters, and the American Center for Law and Justice, a conservative Christian, pro-life group, conducted a study that concluded that some social media sites are "actively censoring" religious content that expresses Christian perspectives, because they forbid "hate speech" in the form of anti-homosexual viewpoints.
By corporations abroad
Several U.S. corporations including Google, Yahoo!, Microsoft, and MySpace practice greater levels of self-censorship in some international versions of their online services. This is most notably the case in these corporations' dealings in China.
In October 2011 US-based Blue Coat Systems of Sunnyvale, California acknowledged that Syria is using its devices to censor Web activity, a possible violation of US trade embargoes.
Trade secrets and copyright
A January 4, 2007 restraining order issued by U.S. District Court Judge Jack B. Weinstein forbade a large number of activists in the psychiatric survivors movement from posting links on their websites to ostensibly leaked documents which purportedly show that Eli Lilly and Company intentionally withheld information as to the lethal side-effects of Zyprexa. The Electronic Frontier Foundation appealed this as prior restraint on the right to link to and post documents, saying that citizen-journalists should have the same First Amendment rights as major media outlets. It was later held that the judgment was unenforceable, though First Amendment claims were rejected.
In May 2011 and January 2012 the US seized the domains of the non-US websites of the non-US citizens Richard O'Dwyer and Kim Dotcom, and sought to extradite them to the US, accusing them of copyright infringement.
In January 2015 details from the Sony Pictures Entertainment hack revealed the Motion Picture Association of America's lobbying of the United States International Trade Commission to mandate that US ISPs, either at the internet transit or internet service provider level, implement IP address blocking of unauthorized file sharing as well as linking websites.
Bay Area Rapid Transit (BART) cell phone service suspension
On July 3, 2011, two officers of the Bay Area Rapid Transit (BART) Police shot and killed Charles Hill at Civic Center Station in San Francisco. On August 11, 2011, BART shut down cell phone services, including mobile Internet access, for three hours in an effort to limit possible protests against the shooting and to keep communications away from protesters at the Civic Center station in San Francisco. The shutdown caught the attention of international media, as well as drawing comparisons to the former Egyptian president Hosni Mubarak in several articles and comments.
On August 29, 2011, a coalition of nine public interest groups led by Public Knowledge filed an Emergency Petition asking the U.S. Federal Communications Commission (FCC) to declare "that the actions taken by the Bay Area Rapid Transit District ("BART") on August 11, 2011 violated the Communications Act of 1934, as amended, when it deliberately interfered with access to Commercial Mobile Radio Service ("CMRS") by the public" and "that local law enforcement has no authority to suspend or deny CMRS, or to order CMRS providers to suspend or deny service, absent a properly obtained order from the Commission, a state commission of appropriate jurisdiction, or a court of law with appropriate jurisdiction".
In December 2011 BART adopted a new "Cell Service Interruption Policy" that only allows shutdowns of cell phone services within BART facilities "in the most extraordinary circumstances that threaten the safety of District passengers, employees and other members of public, the destruction of District property, or the substantial disruption of public transit service." According to a spokesperson for BART, under the new policy the wireless phone system would not be turned off under circumstances similar to those in August 2011. Instead police officers would arrest individuals who break the law.
Interruption of communication services
In March 2012 the FCC requested public comment on the question of whether or when the police and other government officials can intentionally interrupt cellphone and Internet service to protect public safety. In response, through the end of May 2012, the FCC received 137 comments and 9 reply comments. As of July 2013 the proceeding remained open, but the FCC had taken no further action.
In December 2014 the FCC issued an Enforcement Advisory that warns the public "that it is illegal to use a cell phone jammer or any other type of device that blocks, jams or interferes with authorized communications" and that "this prohibition extends to every entity that does not hold a federal authorization, including state and local law enforcement agencies". While jamming was not used by BART to disable cell phones, the legal and regulatory considerations are similar.
In December 2016 the California Law Revision Commission issued a recommendation on "Government Interruption of Communication Service". The Commission concluded that government action to interrupt communications can be constitutional in some circumstances, if the government acts pursuant to procedures that are properly designed to protect constitutional free expression and due process rights. To be constitutional the action will usually need to be approved by a judicial officer who has found (i) probable cause that the communication service is or will be used for an unlawful purpose, (ii) that immediate action is required to protect public health, safety, or welfare and (iii) the affected customer must have a prompt opportunity for adjudication of the government's contentions. For a general interruption of communication service that will affect a large number of people or a large geographic area, judicial approval would also require that the action (iv) is necessary to avoid a serious threat of violence that is both imminent and likely to occur or (v) that the effect on expression is incidental to some other valid government purpose, and (vi) is reasonable, (vii) is content-neutral, (viii) would impair no more speech than is necessary, and (ix) leaves open other ample means of communication. Prior judicial approval is not required in extreme emergencies involving immediate danger of death or great bodily injury where there is insufficient time to obtain a court order.
Beyond constitutional law, a state or local government's ability to effect a general interruption of wireless communication service is also subject to the federal "Emergency Wireless Protocol (EWP)" or "Standard Operating Procedure 303", which established a process for interrupting and restoring wireless communication service during times of national emergency. The effect of this protocol is that state and local government officials can initiate an interruption of communication service, but they cannot directly order wireless communication service providers to take action. Such orders to private wireless communication providers must come from the National Coordinating Center for Communications (NCC) within the Department of Homeland Security (DHS), the federal officials designated by the EWP. If an order authorizing an interruption does not fall within the EWP, it is served directly on the relevant communication service provider.
See also
Internet censorship and surveillance by country
Communications Assistance for Law Enforcement Act (CALEA)
Mass surveillance in the United States
References
This article incorporates licensed material from the Regional Overviews and other sections of the OpenNet Initiative web site.
External links
Global Integrity: Internet Censorship, A Comparative Study; puts US online censorship in cross-country context.
United States
United States |
4291595 | https://en.wikipedia.org/wiki/Ancillary%20data | Ancillary data | Ancillary data is data that has been added to given data and uses the same form of transport. Common examples are cover art images for media files or streams, or digital data added to radio or television broadcasts.
Television
Ancillary data (commonly abbreviated as ANC data), in the context of television systems, refers to a means by which non-video information (such as audio, other forms of essence, and metadata) may be embedded within the serial digital interface. Ancillary data is standardized by SMPTE as SMPTE 291M: Ancillary Data Packet and Space Formatting.
Ancillary data can be located in non-picture portions of horizontal scan lines. This is known as horizontal ancillary data (HANC). Ancillary data can also be located in non-picture regions of the frame. This is known as vertical ancillary data (VANC).
Technical details
Location
Ancillary data packets may be located anywhere within a serial digital data stream, with the following exceptions:
They should not be located in the lines identified as a switch point (which may be lost when switching sources).
They should not be located in the active picture area.
They may not cross the TRS (timing reference signal) packets.
Ancillary data packets are commonly divided into two types, depending on where they are located—specific packet types are often constrained to be in one location or another.
Ancillary packets located in the horizontal blanking region (after EAV but before SAV), regardless of line, are known as horizontal ancillary data, or HANC. HANC is commonly used for higher-bandwidth data, and/or for things that need to be synchronized to a particular line; the most common type of HANC is embedded audio.
Ancillary packets located in the vertical blanking region, and after SAV but before EAV, are known as vertical ancillary data, or VANC. VANC is commonly used for low-bandwidth data, or for things that only need be updated on a per-field or per-frame rate. Closed caption data and VPID are generally stored as VANC.
Note that ANC packets which lie in the dataspace that is in both the horizontal and vertical intervals are considered to be HANC and not VANC.
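This placement rule can be summarized as a small decision procedure. The following Python sketch is illustrative only (the function and flag names are invented for this example, not taken from any SMPTE API); it classifies a packet from whether it falls in the horizontal and/or vertical blanking intervals, as described above.

def classify_anc_location(in_horizontal_blanking, in_vertical_blanking):
    """Classify an ancillary packet as HANC or VANC from where it sits."""
    if in_horizontal_blanking:
        return "HANC"   # horizontal blanking wins, even on vertical-interval lines
    if in_vertical_blanking:
        return "VANC"
    return "active picture (ancillary data not permitted)"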
VANC packets should be inserted in this manner:
(SMPTE 334M section 3): VANC data packets can appear anywhere between the SAV and EAV TRS packets in any line from the second line after the line specified for switching to the last line preceding active video, inclusive. Given the spec for switch points (see RP168 figure 2), the first allowed lines are 12 and 275 (for 525-line/59.94 Hz systems) or 8 and 321 (for 625-line/50 Hz systems). This conflicts with SMPTE 125M, and does not address requirements for carrying DVITC (Digital Vertical Interval TimeCode) and video index packets.
(SMPTE 125M section 3.6.2): VANC should appear only in lines 1-13, 15-19, 264-276, and 278-282, with lines 14 and 277 reserved for DVITC and video index data. This conflicts with SMPTE 334M, and does not address 625-line/50 Hz systems.
Packet format
All ANC packets must start with a start sequence; for component interfaces (the only kind of serial digital interface in widespread use today), the start sequence is 0x000 0x3FF 0x3FF. This sequence is otherwise illegal in the serial digital interface. (In the obsolete composite versions of SDI, the ANC start sequence is a single word, 0x3FC).
Three words immediately follow the start sequence in the header. The first word after the start sequence is the Data Identifier or DID, followed by either a Secondary Data Identifier (SDID) or a Data Block Number (DBN), followed by a Data Count (DC). After the Data Count word are 0–255 (inclusive) User Data Words (UDW), followed by a Checksum (CS) word.
DID
The Data Identifier word (along with the SDID, if used), indicates the type of ancillary data that the packet corresponds to. Data identifiers range from 1 to 255 (FF hex), with 0 being reserved. As the serial digital interface is a 10-bit format, the DID word is encoded as follows:
Bits 0-7 (bit 0 being the LSB), are the raw DID value.
Bit 8 is the even parity bit of bits 0-7.
Bit 9 is the inverse of bit 8.
Thus, a DID of 0x61 (01100001) would be encoded as 0x161 (0101100001), whereas a DID of 0x63 (01100011) would be encoded as 0x263 (1001100011). Note that this encoding scheme ensures that the reserved values in the serial digital interface (0-3 and 1020-1023) are never used.
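As an illustration of the encoding just described, the following Python sketch (the function name is invented for this example, not part of any SMPTE specification) builds the 10-bit word from an 8-bit value and reproduces the two worked values above; the same encoding applies to the SDID, DBN, and DC words described below.

def encode_anc_word(value):
    """Encode an 8-bit ANC header value as a 10-bit word: bits 0-7 carry the
    raw value, bit 8 is the even parity bit over bits 0-7, and bit 9 is the
    inverse of bit 8."""
    if not 0 <= value <= 0xFF:
        raise ValueError("ANC header values are 8-bit")
    bit8 = bin(value).count("1") & 1   # 1 when an odd number of bits are set
    bit9 = bit8 ^ 1                    # bit 9 is the inverse of bit 8
    return (bit9 << 9) | (bit8 << 8) | value

assert encode_anc_word(0x61) == 0x161
assert encode_anc_word(0x63) == 0x263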
If the DID is equal to 128 (0x80) or greater, then the packet is a Type 1 packet, the DID is sufficient to identify the packet type, and the following word is a Data Block Number. If the DID is less than 128, it is a Type 2 packet, and the following word is the Secondary Data Identifier; the DID and SDID together identify the packet type.
SDID
The SDID is only valid if the DID is less than 0x80. The SDID is nominally an 8-bit value, ranging from 0 to 255. It is encoded in the same fashion as the DID.
DID/SDID words of 161 101 (hex) correspond to a DID of 61 hex and an SDID of 1 (once the two high bits are removed); these values would indicate that the packet type is defined by SMPTE 334M, and contains DTV closed captions data.
DBN
The DBN is only valid if the DID is 80 hex or greater. It is (optionally) used to identify multiple packets of the same type within a field; each subsequent packet of the indicated type has a DBN which is one higher than the previous packet, wrapping around as necessary. The DBN is an 8-bit value, encoded in the same fashion as the SDID.
DC
The Data Count word is an 8-bit value, encoded in the same fashion as the DID, which indicates how many user data words are to follow. It can range from 0 to 255.
UDW
User data words are the "payload" present in the ANC packet. They are defined according to the packet type; SMPTE 291M does not define their use or impose any restrictions on the values which may be present in the UDW space. The only restriction is that the reserved values in the serial digital interface (0-3 and 1020-1023) may not be included in the UDW. Many ANC formats, though not all, are essentially 8-bit formats, and encode data in the same manner that the header words are encoded.
Example
SMPTE 352M (Video Payload ID) defines four UDW.
Checksum
The last word in an ANC packet is the Checksum word. It is computed as the sum (modulo 512) of bits 0-8 (not bit 9) of all the other words in the ANC packet, excluding the packet start sequence. Bit 9 of the checksum word is then defined as the inverse of bit 8. Note that the checksum word does not contain a parity bit; instead, the parity bits of other words are included in the checksum calculations.
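A minimal sketch of this calculation, assuming the packet words are held as plain Python integers (the function name is invented for this example):

def anc_checksum(words):
    """Compute the ANC checksum word from the words that follow the start
    sequence (DID, SDID or DBN, DC, and the user data words): sum bits 0-8
    of each word modulo 512, then set bit 9 to the inverse of bit 8."""
    total = sum(w & 0x1FF for w in words) & 0x1FF
    bit8 = (total >> 8) & 1
    return ((bit8 ^ 1) << 9) | total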
Usage
Embedded audio
Embedded audio is audio payload which is (typically) the soundtrack (music, dialogue, and sound effects) for the video program. Two standards, SMPTE 272M (for SD) and SMPTE 299M (for HD and 3G), define how audio is embedded into the ancillary space. The SD and HD standards provide for up to 16 channels of PCM audio, while 3G allows up to 32 channels, typically encoded in the AES3 format. In HD, the embedded audio data packets are carried in the HANC space of the Cb/Cr (chroma) parallel data stream.
In addition, both standards define audio control packets. The audio control packets are carried in the HANC space of the Y (luminance) parallel data stream and are inserted once per field at the second video line past the switching point (see SMPTE RP168 for switching points of various video standards). The audio control packet contains audio-related metadata, such as its timing relative to video, which channels are present, etc.
Embedded audio packets are Type 1 packets.
EDH
EDH packets are used for error detection in standard definition interfaces (they are not necessary in HD interfaces, as the HD-SDI interface includes CRC checkwords built in).
External links
SMPTE: SMPTE 291M-1998: Ancillary Data Packet and Space Formatting
SMPTE: ANSI/SMPTE 125M-1995: Component Video Signal 4:2:2; Bit-Parallel Digital Interface
SMPTE: ANSI/SMPTE 334M-1995: Vertical Ancillary Data Mapping for Bit-Serial Interface
SMPTE: RP168-2002: Definition of Vertical Interval Switching Point for Synchronous Video Switching
SMPTE: SMPTE 299-1:2010: 24-Bit Digital Audio Format for SMPTE 292 Bit-Serial Interface
SMPTE: SMPTE 299-2:2010: Extension of the 24-Bit Digital Audio Format to 32 Channels for 3 Gb/s Bit-Serial Interfaces
SMPTE: Data Identification Word Assignments for Registered DIDs
Film and video technology
Serial digital interface
SMPTE standards |
1276699 | https://en.wikipedia.org/wiki/List%20of%20network%20protocols%20%28OSI%20model%29 | List of network protocols (OSI model) | This article lists protocols, categorized by the nearest layer in the Open Systems Interconnection model. This list is not limited to the OSI protocol family. Many of these protocols are originally based on the Internet Protocol Suite (TCP/IP) and other models, and they often do not fit neatly into OSI layers.
Layer 1 (Physical Layer)
Telephone network modems
IrDA physical layer
USB physical layer
EIA RS-232, EIA-422, EIA-423, RS-449, RS-485
Ethernet physical layer 10BASE-T, 10BASE2, 10BASE5, 100BASE-TX, 100BASE-FX, 1000BASE-T, 1000BASE-SX and other varieties
Varieties of 802.11 Wi-Fi physical layers
DSL
ISDN
T1 and other T-carrier links, and E1 and other E-carrier links
ITU Recommendations: see ITU-T
IEEE 1394 interfaces
TransferJet
Etherloop
ARINC 818 Avionics Digital Video Bus
G.hn/G.9960 physical layer
CAN bus (controller area network) physical layer
Mobile Industry Processor Interface physical layer
Infrared
Frame Relay
FO Fiber optics
X.25
Layer 2 (Data Link Layer)
ARCnet Attached Resource Computer NETwork
ARP Address Resolution Protocol
ATM Asynchronous Transfer Mode
CHAP Challenge Handshake Authentication Protocol
CDP Cisco Discovery Protocol
DCAP Data Link Switching Client Access Protocol
Distributed Multi-Link Trunking
Distributed Split Multi-Link Trunking
DTP Dynamic Trunking Protocol
Econet
Ethernet
FDDI Fiber Distributed Data Interface
Frame Relay
ITU-T G.hn Data Link Layer
HDLC High-Level Data Link Control
IEEE 802.11 WiFi
IEEE 802.16 WiMAX
LACP Link Aggregation Control Protocol
LattisNet
LocalTalk
L2F Layer 2 Forwarding Protocol
L2TP Layer 2 Tunneling Protocol
LLDP Link Layer Discovery Protocol
LLDP-MED Link Layer Discovery Protocol - Media Endpoint Discovery
MAC Media Access Control
Q.710 Simplified Message Transfer Part
Multi-link trunking Protocol
NDP Neighbor Discovery Protocol
PAgP - Cisco Systems proprietary link aggregation protocol
PPP Point-to-Point Protocol
PPTP Point-to-Point Tunneling Protocol
PAP Password Authentication Protocol
RPR IEEE 802.17 Resilient Packet Ring
SLIP Serial Line Internet Protocol (obsolete)
StarLAN
Space Data Link Protocol, one of the norms for Space Data Link from the Consultative Committee for Space Data Systems
STP Spanning Tree Protocol
Split multi-link trunking Protocol
Token Ring a protocol developed by IBM; the name can also be used to describe the token passing ring logical topology that it popularized.
Virtual Extended Network (VEN) a protocol developed by iQuila.
VTP VLAN Trunking Protocol
VLAN Virtual Local Area Network
Network Topology
Asynchronous Transfer Mode (ATM)
IS-IS, Intermediate System - Intermediate System (OSI)
SPB Shortest Path Bridging
MTP Message Transfer Part
NSP Network Service Part
TRILL (TRansparent Interconnection of Lots of Links)
Layer 2.5
ARP Address Resolution Protocol
MPLS Multiprotocol Label Switching
PPPoE Point-to-Point Protocol over Ethernet
TIPC Transparent Inter-process Communication
Layer 3 (Network Layer)
CLNP Connectionless Networking Protocol
IPX Internetwork Packet Exchange
NAT Network Address Translation
Routed-SMLT
SCCP Signalling Connection Control Part
AppleTalk DDP
HSRP Hot Standby Router protocol
VRRP Virtual Router Redundancy Protocol
IP Internet Protocol
ICMP Internet Control Message Protocol
ARP Address Resolution Protocol
RIP Routing Information Protocol (v1 and v2)
OSPF Open Shortest Path First (v1 and v2)
IPSEC IPsec
Layer 3+4 (Protocol Suites)
AppleTalk
DECnet
IPX/SPX
Internet Protocol Suite
Xerox Network Systems
Layer 4 (Transport Layer)
AEP AppleTalk Echo Protocol
AH Authentication Header over IP or IPSec
DCCP Datagram Congestion Control Protocol
ESP Encapsulating Security Payload over IP or IPSec
FCP Fibre Channel Protocol
NetBIOS NetBIOS, File Sharing and Name Resolution
IL Originally developed as transport layer for 9P
iSCSI Internet Small Computer System Interface
NBF NetBIOS Frames protocol
SCTP Stream Control Transmission Protocol
Sinec H1 for telecontrol
TUP, Telephone User Part
SPX Sequenced Packet Exchange
NBP Name Binding Protocol {for AppleTalk}
TCP Transmission Control Protocol
UDP User Datagram Protocol
QUIC
Layer 5 (Session Layer)
This layer, the presentation layer, and the application layer are combined in the TCP/IP model.
9P Distributed file system protocol developed originally as part of Plan 9
ADSP AppleTalk Data Stream Protocol
ASP AppleTalk Session Protocol
H.245 Call Control Protocol for Multimedia Communications
iSNS Internet Storage Name Service
NetBIOS, File Sharing and Name Resolution protocol - the basis of file sharing with Windows.
NetBEUI, NetBIOS Enhanced User Interface
NCP NetWare Core Protocol
PAP Printer Access Protocol
RPC Remote Procedure Call
RTCP RTP Control Protocol
SDP Sockets Direct Protocol
SMB Server Message Block
SMPP Short Message Peer-to-Peer
SOCKS "SOCKetS"
ZIP Zone Information Protocol {For AppleTalk}
This layer provides session management capabilities between hosts. For example, if a host requires password verification for access and credentials are provided, then the password verification does not need to happen again for that session. This layer can assist in synchronization, dialog control and critical operation management (e.g., an online bank transaction).
Layer 6 (Presentation Layer)
TLS Transport Layer Security
AFP Apple Filing Protocol
Independent Computing Architecture (ICA), the Citrix system core protocol
Lightweight Presentation Protocol (LPP)
NetWare Core Protocol (NCP)
Network Data Representation (NDR)
Tox, The Tox protocol is sometimes regarded as part of both the presentation and application layer
eXternal Data Representation (XDR)
X.25 Packet Assembler/Disassembler Protocol (PAD)
Layer 7 (Application Layer)
SOAP, Simple Object Access Protocol
Simple Service Discovery Protocol, A discovery protocol employed by UPnP
TCAP, Transaction Capabilities Application Part
Universal Plug and Play
DHCP
DNS Domain Name System
BOOTP Bootstrap Protocol
HTTP
HTTPS
NFS
POP3
SMTP
SNMP
FTP
NTP
IRC
Telnet
SSH
TFTP
IMAP
Gemini
Other protocols
Controller Area Network
Protocol description languages
Abstract Syntax Notation One (ASN.1)
See also
List of automation protocols
Systems Network Architecture (SNA) developed by IBM
Distributed Systems Architecture (DSA) developed by Honeywell-Bull
Distributed System Security Architecture (DSSA)
OSI Model
Further reading
External links
Protocol Encapsulation Chart - A PDF file illustrating the relationship between common protocols and the OSI Reference Model.
Network Protocols Acronyms and Abbreviations - list of network protocols with abbreviations order by index.
Network protocols
OSI |
3420349 | https://en.wikipedia.org/wiki/Users%27%20group | Users' group | A users' group (also user's group or user group) is a type of club focused on the use of a particular technology, usually (but not always) computer-related.
Overview
Users' groups started in the early days of mainframe computers, as a way to share sometimes hard-won knowledge and useful software, usually written by end users independently of the vendor-supplied programming efforts. SHARE, a user group originated by aerospace industry corporate users of IBM mainframe computers, was founded in 1955 and is the oldest computer user group still active. DECUS, the DEC User's Society, was founded in 1961 and its descendant organization, Connect Worldwide, still operates. The Computer Measurement Group (CMG) was founded in 1974 by systems professionals with a common interest in (mainframe) capacity management, and continues today with a much broader mission. The first UNIX users' group organized in 1978.
Users' groups began to proliferate with the microcomputer revolution of the late 1970s and early 1980s as hobbyists united to help each other with programming and configuration and use of hardware and software. Especially prior to the emergence of the World Wide Web, obtaining technical assistance with computers was often onerous, while computer clubs would gladly provide free technical support. Users' groups today continue to provide "real life" opportunities for learning from the shared experience of the members and may provide other functions such as a newsletter, group purchasing opportunities, tours of facilities, or speakers at group meetings.
A users' group may provide its members (and sometimes the general public as well) with one or more of the following services:
periodic meetings
annual or less frequent users conferences
public lectures
a newsletter
a library of media or tools
a software archive
an online presence such as a dial-up BBS or Internet website
swap meets
technical support
social events
Code Camp
Users' groups may be organized around a particular brand of current hardware (e.g. IBM, Macintosh, AMD), or current software and operating systems (e.g. Linux, Microsoft Windows, macOS), or more rarely may be dedicated to obsolescent, retro systems or historical computers (e.g. Apple II, PDP-11, Osborne). An example of an early user group is the Apple User Group Connection.
Computer user group
A computer user group (also known as a computer club) is a group of people who enjoy using microcomputers or personal computers and who meet regularly to discuss the use of computers, share knowledge and experience, hear from representatives of hardware manufacturers and software publishers, and hold other related activities. They may host special interest workgroups, often focusing on one particular aspect of computing.
Computer user groups meet both virtually and in hackerspaces. Computer user groups may consist of members who primarily use a specific operating system, such as Linux. While many hackers use free and open source software, others use Macintosh, RISC OS, Windows and Amiga OS. There are also other user groups that concentrate on either Mac OS (Macintosh User Group or MUG) or Linux (Linux User Group or LUG).
Many computer user groups belong to an umbrella organization, the Association of Personal Computer User Groups or APCUG.
See also
Hobby
List of users' groups
References
Thibodeau, Patrick. "Share Looks Back at 50 Years, Continues to Evolve." Computerworld, 7 Mar. 2005. Web. 21 Apr. 2015.
User groups |
2085170 | https://en.wikipedia.org/wiki/Diffusing%20update%20algorithm | Diffusing update algorithm | The diffusing update algorithm (DUAL) is the algorithm used by Cisco's EIGRP routing protocol to ensure that a given route is recalculated globally whenever it might cause a routing loop. It was developed by J.J. Garcia-Luna-Aceves at SRI International. The full name of the algorithm is DUAL finite-state machine (DUAL FSM). EIGRP is responsible for the routing within an autonomous system, and DUAL responds to changes in the routing topology and dynamically adjusts the routing tables of the router automatically.
EIGRP uses a feasibility condition to ensure that only loop-free routes are ever selected. The feasibility condition is conservative: when the condition is true, no loops can occur, but the condition might under some circumstances reject all routes to a destination although some are loop-free.
When no feasible route to a destination is available, the DUAL algorithm invokes a diffusing computation to ensure that all traces of the problematic route are eliminated from the network, at which point the normal Bellman–Ford algorithm is used to recover a new route.
Operation
DUAL uses three separate tables for the route calculation. These tables are created using information exchanged between the EIGRP routers. The information is different than that exchanged by link-state routing protocols. In EIGRP, the information exchanged includes the routes, the "metric" or cost of each route, and the information required to form a neighbor relationship (such as AS number, timers, and K values). The three tables and their functions in detail are as follows:
Neighbor table contains information on all other directly connected routers. A separate table exists for each supported protocol (IP, IPX, etc.). Each entry corresponds to a neighbor and records its network interface and address. In addition, a timer is initialized to trigger the periodic detection of whether the connection is alive. This is achieved through "Hello" packets. If a "Hello" packet is not received from a neighbor for a specified time period, the router is assumed down and removed from the neighbor table.
Topology table contains the metric (cost information) of all routes to any destination within the autonomous system. This information is received from neighboring routers contained in the Neighbor table. The primary (successor) and secondary (feasible successor) routes to a destination will be determined with the information in the topology table. Among other things, each entry in the topology table contains the following:
"FD (Feasible Distance)": The calculated metric of a route to a destination within the autonomous system.
"RD (Reported Distance)": The metric to a destination as advertised by a neighboring router. RD is used to calculate the FD, and to determine if the route meets the "feasibility condition".
Route Status: A route is marked either "active" or "passive". "Passive" routes are stable and can be used for data transmission. "Active" routes are being recalculated, and/or not available.
Routing table contains the best route(s) to a destination (in terms of the lowest "metric"). These routes are the successors from the topology table.
DUAL evaluates the data received from other routers in the topology table and calculates the primary (successor) and secondary (feasible successor) routes. The primary path is usually the path with the lowest metric to reach the destination, and the redundant path is the path with the second lowest cost (if it meets the feasibility condition). There may be multiple successors and multiple feasible successors. Both successors and feasible successors are maintained in the topology table, but only the successors are added to the routing table and used to route packets.
For a route to become a feasible successor, its RD must be smaller than the FD of the successor. If this feasibility condition is met, there is no way that adding this route to the routing table could cause a loop.
If all the successor routes to a destination fail, the feasible successor becomes the successor and is immediately added to the routing table. If there is no feasible successor in the topology table, a query process is initiated to look for a new route.
Example
Legend:
+ = Router
− or | = Link
(X) = Metric of link
A (2) B (1) C
+ - - - - - + - - - - - +
| |
(2)| | (3)
| |
+ - - - - - +
D (1) E
Now a client on router E wants to talk to a client on router A. That means a route between router A and router E must be available. This route is calculated as follows:
The immediate neighbours of router E are router C and router D. DUAL in router E asks for the reported distance (RD) from routers C and D respectively to router A. The following are the results:
Destination: Router A
via D: RD(4)
via C: RD(3)
The route via C therefore has the lowest reported distance. In the next step, the distances from router E to its neighbours are added to the reported distances to get the feasible distance (FD):
Destination: Router A
via D: RD(4), FD(5)
via C: RD(3), FD(6)
DUAL therefore finds that the route via D has the least total cost. Then the route via D will be marked as "successor", equipped with passive status and registered in the routing table. The route via C is kept as a "feasible successor", because its RD is less than the FD of the successor:
Destination: Router A
via D: RD(4), FD(5) successor
via C: RD(3), FD(6) feasible successor
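The selection above can be expressed compactly in code. The following Python sketch is illustrative only (it is not Cisco's implementation); it takes the reported distance and the cost of the link to each neighbour, mirroring the tables above, and applies the feasibility condition to pick the successor and any feasible successors.

def select_routes(candidates):
    """candidates maps a neighbour name to (reported_distance, link_cost).
    The successor is the route with the lowest feasible distance; a feasible
    successor is any other route whose reported distance is strictly below
    the successor's feasible distance."""
    feasible = {n: (rd, rd + cost) for n, (rd, cost) in candidates.items()}
    successor = min(feasible, key=lambda n: feasible[n][1])
    fd = feasible[successor][1]
    feasible_successors = [n for n, (rd, _) in feasible.items()
                           if n != successor and rd < fd]
    return successor, fd, feasible_successors

# Router E's view of destination A, as in the example above
print(select_routes({"D": (4, 1), "C": (3, 3)}))   # ('D', 5, ['C'])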
References
Routing protocols
Routing algorithms
SRI International software |
6967890 | https://en.wikipedia.org/wiki/Timeline%20of%20virtualization%20development | Timeline of virtualization development | The following is a timeline of virtualization development. In computing, virtualization is the use of a computer to simulate another computer. Through virtualization, a host simulates a guest by exposing virtual hardware devices, which may be done through software or by allowing access to a physical device connected to the machine.
Timelines
Note: This timeline is missing data for important historical systems, including: Atlas Computer (Manchester), GE 645, Burroughs B5000
1964
IBM Cambridge Scientific Center begins development of CP-40.
1965
IBM M44/44X, experimental paging system, in use at Thomas J. Watson Research Center.
IBM announces the IBM System/360-67, a 32-bit CPU with virtual memory hardware (August 1965).
1966
IBM ships the S/360-67 computer in June 1966
IBM begins work on CP-67, a reimplementation of CP-40 for the S/360-67.
1967
CP-40 (January) and CP-67 (April) go into production time-sharing use.
1968
CP/CMS installed at eight initial customer sites.
CP/CMS submitted to IBM Type-III Library by MIT's Lincoln Laboratory, making system available to all IBM S/360 customers at no charge in source code form.
Resale of CP/CMS access begins at time-sharing vendor National CSS (becoming a distinct version, eventually renamed VP/CSS).
1970
IBM System/370 announced (June) – without virtual memory.
Work begins on CP-370, a complete reimplementation of CP-67, for use on the System/370 series.
1971
First System/370 shipped: S/370-155 (January).
1972
Announcement of virtual memory added to System/370 series.
VM/370 announced – and running on announcement date. VM/370 includes the ability to run VM under VM (previously implemented both at IBM and at user sites under CP/CMS, but not made part of standard releases).
1973
First shipment of announced virtual memory S/370 models (April: -158, May: -168).
1974–1998
[ongoing history of VM family and VP/CSS.]
1977
Initial commercial release of VAX/VMS, later renamed OpenVMS.
1985
October 9, 1985: Announcement of the Intel 80286-based AT&T 6300+ with Simultask, a virtual machine monitor developed by Locus Computing Corporation in collaboration with AT&T, that enabled the direct execution of an Intel 8086 guest operating system under a host Unix System V Release 2 OS. Although the product was marketed with Microsoft MS-DOS as the guest OS, in fact the Virtual Machine could support any realmode operating system or standalone program (such as Microsoft Flight Simulator) that was written using only valid 8086 instructions (not instructions introduced with the 80286). Locus subsequently developed this technology into their "Merge" product line.
1987
January 1987: A "product evaluation" version of Merge/386 from Locus Computing Corporation was made available to OEMs. Merge/386 made use of the Virtual 8086 mode provided by the Intel 80386 processor, and supported multiple simultaneous virtual 8086 machines. The virtual machines supported unmodified guest operating systems and standalone programs such as Microsoft Flight Simulator; but in typical usage the guest was MS-DOS with a Locus proprietary redirector (also marketed for networked PCs as "PC-Interface") and a "network" driver that provided communication with a regular user-mode file server process running under the host operating system on the same machine.
October 1987: Retail Version 1.0 of Merge/386 began shipping, offered with Microport Unix System V Release 3.
1988
SoftPC 1.0 for Sun was introduced in 1988 by Insignia Solutions
SoftPC appears in its first version for Apple Macintosh. These versions (Sun and Macintosh) only have support for DOS.
1991
IBM introduced the OS/2 Virtual DOS Machine (VDM) with support for x86 virtual 8086 mode, capable of virtualizing DOS/Windows and other 16-bit operating systems, such as CP/M-86
1994
Kevin Lawton leaves MIT Lincoln Lab and starts the Bochs project. Bochs was initially coded for the x86 architecture, emulating the BIOS, processor, and other x86-compatible hardware with simple algorithms, isolated from the rest of the environment. It eventually incorporated the ability to emulate the BIOS and core processor independently of the host architecture (Itanium, x86_64, ARM, MIPS, PowerPC, etc.), with the advantage that the application is multi-platform (BSD, Linux, Windows, Mac, Solaris).
1997
First version of Virtual PC for Macintosh platform was released in June 1997 by Connectix
1998
June 15, 1998, Simics/sun4m is presented at USENIX'98, demonstrating full system simulation by booting unmodified Linux 2.0.30 and Solaris 2.6 from disk images copied with dd (Unix).
October 26, 1998, VMware filed for a patent on their techniques, which was granted as U.S. Patent 6,397,242
1999
February 8, 1999, VMware introduced VMware Virtual Platform for the Intel IA-32 architecture.
2000
FreeBSD 4.0 is released, including initial implementation of FreeBSD jails
IBM announces z/VM, new version of VM for IBM's 64-bit z/Architecture
2001
January 31, 2001, AMD and Virtutech release Simics/x86-64 ("Virtuhammer") to support the new 64-bit architecture for x86. Virtuhammer is used to port Linux distributions and the Windows kernel to x86-64 well before the first x86-64 processor (Opteron) was available in April 2003.
June, Connectix launches its first version of Virtual PC for Windows.
July, VMware created the first x86 server virtualization product.
Egenera, Inc. launches their Processor Area Network (PAN Manager) software and BladeFrame chassis which provide hardware virtualization of processing blade's (pBlade) internal disk, network interface cards, and serial console.
2003
First release of first open-source x86 hypervisor, Xen
February 18, 2003, Microsoft acquired virtualization technologies (Virtual PC and unreleased product called "Virtual Server") from Connectix Corporation.
Late 2003, EMC acquired VMware for $635 million.
Late 2003, VERITAS acquired Ejascent for $59 million.
November 10, 2003 Microsoft releases Microsoft Virtual PC, which is machine-level virtualization technology, to ease the transition to Windows XP.
2005
HP releases Integrity Virtual Machines 1.0 and 1.2 which ran only HP-UX
October 24, 2005 VMware releases VMware Player, a free player for virtual machines, to the masses.
Sun releases Solaris (operating system) 10, including Solaris Zones, for both x86/x64 and SPARC systems
2006
July 12, 2006 VMware releases VMware Server, a free machine-level virtualization product for the server market.
Microsoft Virtual PC 2006 is released as a free program, also in July.
July 17, 2006 Microsoft bought Softricity.
August 16, 2006 VMware announces of the winners of the virtualization appliance contest.
September 26, 2006 moka5 delivers LivePC technology.
HP releases Integrity Virtual Machines Version 2.0, which supports Windows Server 2003, CD and DVD burners, tape drives and VLAN.
December 11, 2006 Virtual Iron releases Virtual Iron 3.1, a free bare metal virtualization product for enterprise server virtualization market.
2007
The open-source KVM is released; it is integrated with the Linux kernel and provides virtualization only on Linux systems, requiring hardware virtualization support.
January 15, 2007 innoTek released VirtualBox Open Source Edition (OSE), the first professional PC virtualization solution released as open source under the GNU General Public License (GPL). It includes some code from the QEMU project.
Sun releases Solaris 8 Containers to enable migration of a Solaris 8 computer into a Solaris Container on a Solaris 10 system – for SPARC only
2008
January 15, 2008 VMware, Inc. announced it has entered into a definitive agreement to acquire Thinstall, a privately held application virtualization software company.
February 12, 2008 Sun Microsystems announced that it had entered into a stock purchase agreement to acquire innotek, makers of VirtualBox.
In April, VMware releases VMware Workstation 6.5 beta, the first program for Windows and Linux to enable DirectX 9 accelerated graphics on Windows XP guests.
Year 1960
In the mid-1960s, IBM's Cambridge Scientific Center developed CP-40, the first version of CP/CMS. It went into production use in January 1967. From its inception, CP-40 was intended to implement full virtualization. Doing so required hardware and microcode customization on a S/360-40, to provide the necessary address translation and other virtualization features. Experience on the CP-40 project provided input to the development of the IBM System/360-67, announced in 1965 (along with its ill-starred operating system, TSS/360). CP-40 was reimplemented for the S/360-67 as CP-67, and by April 1967, both versions were in daily production use. CP/CMS was made generally available to IBM customers in source code form, as part of the unsupported IBM Type-III Library, in 1968.
Year 1970
IBM announced the System/370 in 1970. To the disappointment of CP/CMS users – as with the System/360 announcement – the series would not include virtual memory. In 1972, IBM changed direction, announcing that the option would be made available on all S/370 models, and also announcing several virtual storage operating systems, including VM/370. By the mid-1970s, CP/CMS, VM, and the maverick VP/CSS were running on numerous large IBM mainframes. By the late 80s, there were reported to be more VM licenses than MVS licenses.
Year 1999
On February 8, 1999, VMware introduced the first x86 virtualization product, VMware Virtual Platform, based on earlier research by its founders at Stanford University.
VMware Virtual Platform was based on software emulation with Guest/Host OS design that required all Guest environments be stored as files under the host OS filesystem.
Year 2008
VMware releases VMware Workstation 6.5 beta, the first program for Windows and Linux to enable DirectX 9 accelerated graphics on Windows XP guests.
Overview
As an overview, there are three levels of virtualization:
At the hardware level, the VMs can run multiple guest OSes. This is best used for testing and training that require networking interoperability between multiple OSes, since not only can the guest OSes differ from the host OS, there can be as many guest OSes as VMs, as long as there is enough CPU, RAM and storage space. IBM introduced this around 1990 under the name logical partitioning (LPAR), at first only in the mainframe field.
At the operating system level, only one OS can be virtualized: the guest OS is the host OS. This is similar to having many terminal server sessions without locking down the desktop. In that sense it offers the best of both worlds, combining the speed of a terminal server session with the benefit of full access to the desktop as a virtual machine, where the administrator can still control quotas for CPU, RAM and disk. As with the hardware level, this is still considered server virtualization, since each guest OS has its own IP address and can therefore be used for networking applications such as web hosting.
At the application level, the software runs on the host OS directly, without any guest OS, and can run in a locked-down desktop, including a terminal server session. This is called application virtualization or desktop virtualization, which virtualizes the front end, whereas server virtualization virtualizes the back end. Application streaming refers to delivering applications directly onto the desktop and running them locally; traditionally in terminal server computing, applications run on the server, not locally, with only screen updates streamed to the desktop.
Application virtualization
Application virtualization solutions such as VMware ThinApp, Softricity, and Trigence attempt to separate application specific files and settings from the host operating system, thus allowing them to run in more-or-less isolated sandboxes without installation and without the memory and disk overhead of full machine virtualization. Application virtualization is tightly tied to the host OS and thus does not translate to other operating systems or hardware. VMware ThinApp and Softricity are Intel Windows centric, while Trigence supports Linux and Solaris. Unlike machine virtualization, Application virtualization does not use code emulation or translation so CPU related benchmarks run with no changes, though filesystem benchmarks may experience some performance degradation. On Windows, VMware ThinApp and Softricity essentially work by intercepting filesystem and registry requests by an application and redirecting those requests to a preinstalled isolated sandbox, thus allowing the application to run without installation or changes to the local PC. Though VMware ThinApp and Softricity both began independent development around 1998, behind the scenes VMware ThinApp and Softricity are implemented using different techniques:
VMware ThinApp works by packaging an application into a single "packaged" EXE which includes the runtime plus the application's data files and registry. VMware ThinApp's runtime is loaded by Windows as a normal Windows application; from there, the runtime replaces the Windows loader, filesystem, and registry for the target application and presents a merged image of the host PC as if the application had previously been installed. VMware ThinApp replaces all related API functions for the host application; for example, the ReadFile API supplied to the application must pass through VMware ThinApp before it reaches the operating system. If the application is reading a virtual file, VMware ThinApp handles the request itself; otherwise the request is passed on to the operating system. Because VMware ThinApp is implemented in user mode without device drivers and does not require a preinstalled client, applications can run directly from USB flash drives or network shares without needing elevated security privileges.
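The general pattern described above – intercept a file request, serve it from the packaged virtual filesystem if present, and otherwise fall through to the real one – can be sketched briefly. The Python fragment below is a conceptual analogue only, not how ThinApp is actually implemented (ThinApp replaces Win32 APIs such as ReadFile in user mode); the package path and layout are hypothetical.

```python
import builtins
import os

VIRTUAL_ROOT = "/tmp/app_package"   # hypothetical packaged filesystem root
_real_open = builtins.open          # keep a reference to the real call

def virtualized_open(path, mode="r", *args, **kwargs):
    """Serve reads from the virtual package when the file exists there;
    otherwise pass the request through to the host filesystem."""
    candidate = os.path.join(VIRTUAL_ROOT, str(path).lstrip("/"))
    if "r" in mode and os.path.exists(candidate):
        return _real_open(candidate, mode, *args, **kwargs)
    return _real_open(path, mode, *args, **kwargs)

builtins.open = virtualized_open    # later open() calls are now intercepted
```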
Softricity (acquired by Microsoft) operates on a similar principle, using device drivers to intercept file requests in ring 0, at a level closer to the operating system. Softricity installs a client with administrator rights which can then be accessed by restricted users on the machine. An advantage of virtualizing at the kernel level is that the Windows loader (responsible for loading EXE and DLL files) does not need to be reimplemented, so greater application compatibility can be achieved with less work (Softricity claims to support most major applications). A disadvantage of the ring-0 implementation is that it requires elevated security privileges to install, and crashes or security defects can affect the whole system rather than being isolated to a specific application.
Because Application Virtualization runs all application code natively, it can only provide security guarantees as strong as the host OS is able to provide. Unlike full machine virtualization, Application virtualization solutions currently do not work with device drivers and other code that runs at ring0 such as virus scanners. These special applications must be installed normally on the host PC to function.
Managed runtimes
Another technique sometimes referred to as virtualization, is portable byte code execution using a standard portable native runtime (aka Managed Runtimes). The two most popular solutions today include Java and .NET. These solutions both use a process called JIT (Just in time) compilation to translate code from a virtual portable Machine Language into the local processor's native code. This allows applications to be compiled for a single architecture and then run on many different machines. Beyond machine portable applications, an additional advantage to this technique includes strong security guarantees. Because all native application code is generated by the controlling environment, it can be checked for correctness (possible security exploits) prior to execution. Programs must be originally designed for the environment in question or manually rewritten and recompiled to work for these new environments. For example, one cannot automatically convert or run a Windows / Linux native app on .NET or Java. Because portable runtimes try to present a common API for applications for a wide variety of hardware, applications are less able to take advantage of OS specific features. Portable application environments also have higher memory and CPU overheads than optimized native applications, but these overheads are much smaller compared with full machine virtualization. Portable Byte Code environments such as Java have become very popular on the server where a wide variety of hardware exist and the set of OS-specific APIs required is standard across most Unix and Windows flavors. Another popular feature among managed runtimes is garbage collection, which automatically detects unused data in memory and reclaims the memory without the developer having to explicitly invoke "free" operations.
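As a toy illustration of the portability idea described above, the Python sketch below runs a made-up, machine-independent instruction list inside a tiny interpreter; real JVM or .NET bytecode, verification, JIT compilation and garbage collection are of course far more elaborate, and the instruction set here is invented for the example.

```python
# A deliberately tiny stack-based "bytecode" interpreter. The same
# instruction list runs unchanged wherever the runtime itself runs,
# which is the write-once-run-anywhere property discussed above.
def run(program):
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":
            print(stack.pop())
    return stack

# Portable "program": computes and prints 2 + 3.
run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)])
```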
Neutral view of application virtualization
Setting aside the industry bias of the past, there are two other, more neutral ways to look at the application level:
The first type comprises application packagers (VMware ThinApp, Softricity), whereas the other comprises application compilers and runtimes (Java and .NET). A packager can be used to stream an existing application without modifying its source code, whereas the latter can only be used with applications compiled from source for that runtime.
Another way to look at it is from the hypervisor point of view: the first is a "hypervisor" in user mode, whereas the other is a "hypervisor" in runtime mode. The word hypervisor is in quotation marks because both merely exhibit similar behaviour, intercepting calls in different modes: the user-mode packager intercepts system calls before they reach kernel mode, while the runtime intercepts them within its own managed environment. A real hypervisor intercepts them via hypercalls in kernel mode. It has been speculated that once Windows ships with a hypervisor (a virtual machine monitor), there may be less need for a separate JRE and CLR, and that on Linux the JRE could be modified to run on top of the hypervisor as a loadable kernel module in kernel mode, instead of as a comparatively slow legacy runtime in user mode. Running directly on a Linux hypervisor in this way, it would effectively be a "Java OS" rather than just another runtime-mode JIT.
Mendel Rosenblum called the runtime mode a high-level language virtual machine in August 2004. At that time, the first type – intercepting system calls in user mode – was still largely unknown, so his article did not mention it, and application streaming remained obscure in 2004. If the JVM were to evolve from a high-level language virtual machine into a "Java OS" running on a Linux hypervisor, Java applications would gain the same kind of level playing field that Windows applications later gained with Softricity.
In summary, the first virtualizes the binary code so that an application can be installed once and run anywhere, whereas the other virtualizes the source code, using bytecode or managed code, so that it can be written once and run anywhere. Both are partial solutions to the twin portability problems of application portability and source-code portability; combining the two into one complete solution at the hypervisor level in kernel mode remains an open question.
Further development
Microsoft bought Softricity on July 17, 2006, and popularized application streaming, giving traditional Windows applications a level playing field with Web and Java applications with respect to ease of distribution (i.e. no setup required, just click and run). In principle, every JRE and CLR could then run virtualized in user mode, without kernel-mode drivers being installed, so that multiple versions of the JRE and CLR could even run concurrently in RAM.
The integration of the Linux Hypervisor into the Linux Kernel and that of the Windows Hypervisor into the Windows Kernel may make rootkit techniques such as the filter driver obsolete.
This may take a while, as integration of a Linux hypervisor still depends on the Xen and VMware hypervisors becoming fully compatible with each other, with Oracle pressing for a hypervisor to be accepted into the Linux kernel so that it can move ahead with its grid computing plans. Meanwhile, Microsoft has decided to be fully compatible with the Xen hypervisor. IBM is not sitting idle either: it is working with VMware on x86 servers, and possibly helping Xen move from x86 to the Power ISA using the open-source rHype.
Finally, to complete the hypervisor picture, Intel VT-x and AMD-V provide hardware support intended to ease and speed up virtualization so that a guest OS can be run unmodified, without paravirtualization.
See also
Comparison of platform virtualization software
Comparison of application virtual machines
Emulator
Hypervisor
IBM SAN Volume Controller
Operating system-level virtualization
Physical-to-Virtual
Virtual machine monitor
Virtual tape library
X86 virtualization
References
External links
Application Virtualization: Streamlining Distribution August 31, 2006—By James Drews
Windows Virtualization from Microsoft
Virtualization Overview from VMware
An introduction to Virtualization
Weblog post on how virtualization can be used to implement Mandatory Access Control.
The Effect of Virtualization on OS Interference in PDF format.
VM/360 history
Virtualization
Virtualization software |
54249476 | https://en.wikipedia.org/wiki/Body%20Labs | Body Labs | Body Labs is a Manhattan-based software company founded in 2013. It provides human-aware artificial intelligence software that understands the 3D body shape and motion of people from RGB photos or videos.
In October 2017, the company was acquired by Amazon.
History
Body Labs was founded by Michael J. Black, William J. O'Farrell, Eric Rachlin, and Alex Weiss who were connected at Brown University and Max Planck Institute for Intelligent Systems.
In 2002, Black was researching how to create a statistical model of the human body. While Black was teaching a course on computer vision at Brown University, the Virginia State Police contacted him about a robbery and murder at a 7-Eleven. The police wanted to use computer vision to identify the suspect in a surveillance video. By creating a statistical model, Black's group could corroborate some of the evidence in the case, such as confirming the suspect's height.
On November 13, 2014, Body Labs announced $2.2 million in Seed funding led by FirstMark Capital, with additional investors including New York Angels and existing investors.
On November 3, 2015, Body Labs announced $11 million in Series A funding led by Intel Capital, with additional investors including FirstMark Capital, Max-Planck-Innovation GmbH, Osage University Partners, Catalus Capital and the company founders.
Products
BodyKit
On March 3, 2015, Body Labs launched BodyKit, a collection of APIs and embeddable components for integrating the human body into apps and tools.
Body Labs Blue
On July 20, 2016, Body Labs launched Body Labs Blue, an API and embeddable Web interface that takes physical measurements and predicts additional digital measurements to help with custom clothing creation.
Body Labs Red
On October 5, 2016, Body Labs launched Body Labs Red, an API for automatically processing 3D scans into a full 3D body model. Additionally, Body Labs announced a partnership with 3dMD to process their 3D scans.
Mosh Mobile App
On February 15, 2017, Body Labs released Mosh on the App Store, an Apple iOS app that predicts the 3D human pose and shape of a subject and renders 3D effects on them.
SOMA: Human-Aware AI
On June 1, 2017, Body Labs launched SOMA, software that uses artificial intelligence to predict 3D human shape and motion from RGB photos or video.
On July 21, 2017, Body Labs launched the SOMA Shape API for 3D model and measurement prediction. The Shape API allows third-party apps to easily connect to the SOMA backend.
References
Amazon (company) acquisitions
Software companies based in New York City
Companies based in Manhattan
Software companies established in 2013
2013 establishments in New York City
American companies established in 2013
2017 mergers and acquisitions
Software companies of the United States |
3776164 | https://en.wikipedia.org/wiki/Basic%20Interoperable%20Scrambling%20System | Basic Interoperable Scrambling System | Basic Interoperable Scrambling System, usually known as BISS, is a satellite signal scrambling system developed by the European Broadcasting Union and a consortium of hardware manufacturers.
Prior to its development, "ad hoc" or "occasional use" satellite news feeds were transmitted either using proprietary encryption methods (e.g. RAS, or PowerVu), or without any encryption. Unencrypted satellite feeds allowed anyone with the correct equipment to view the program material.
Proprietary encryption methods were determined by encoder manufacturers, and placed major compatibility limitations on the type of satellite receiver (IRD) that could be used for each feed. BISS was an attempt to create an "open platform" encryption system, which could be used across a range of manufacturers equipment.
There are two main types of BISS encryption in use:
BISS-1 transmissions are protected by a 12-digit hexadecimal "session key" that is agreed by the transmitting and receiving parties prior to transmission. The key is entered into both the encoder and the decoder; it then forms part of the encryption of the digital TV signal, and any BISS-capable receiver with the correct key can decrypt the signal.
BISS-E (E for encrypted) is a variation in which the decoder stores a secret BISS key entered by, for example, a rightsholder. This buried key is unknown to the user of the decoder. The user is then sent a 16-digit hexadecimal code, which is entered as a "session key". The decoder internally combines this session key with the buried key mathematically to calculate a BISS-1 key that can decrypt the signal.
Only a decoder with the correct secret BISS-key will be able to decrypt a BISS-E feed. This gives rightsholder control as to exactly which decoder can be used to decrypt/decode a specific feed. Any BISS-E encrypted feed will have a corresponding BISS-1 key that will unlock it.
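The exact combination function is defined in the EBU's BISS-E specification and is not reproduced here. The Python sketch below is purely illustrative of the pattern just described – a decoder-resident secret plus a transmitted session word yielding a working BISS-1 key – using an invented placeholder (a SHA-256 hash) instead of the real derivation, and hypothetical key values.

```python
import hashlib

def derive_biss1_key(buried_key_hex, session_key_hex):
    """Illustrative only: combine a secret key stored in the decoder with a
    transmitted 16-digit session key to obtain a 12-hex-digit working key.
    The real BISS-E derivation is specified by the EBU and is NOT this hash."""
    material = bytes.fromhex(buried_key_hex) + bytes.fromhex(session_key_hex)
    digest = hashlib.sha256(material).hexdigest()
    return digest[:12].upper()   # BISS-1 session words are 12 hex digits

# Hypothetical key values for illustration.
print(derive_biss1_key("0123456789ABCDEF", "0011223344556677"))
```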
BISS-E is amongst others used by EBU to protect UEFA Champions League and other high-profile satellite feeds.
External links
BISS-E Technical Specification by EBU
Digital television
Digital rights management systems
Broadcast engineering
Satellite broadcasting
Conditional-access television broadcasting |
818031 | https://en.wikipedia.org/wiki/Yahoo%21%20Music%20Radio | Yahoo! Music Radio | Yahoo! Music Radio (formerly known as LAUNCHcast) was an Internet radio service offered through Yahoo! Music. The service, originally offered by LAUNCH Media and developed by Todd Beaupré, Jason Snyder and Jeff Boulter, debuted on November 11, 1999, and was purchased by Yahoo! on June 28, 2001. It was powered by CBS Radio from 2009 and by Clear Channel Communications' iHeartRadio from 2012. The service closed in early 2014.
2001–2009: LAUNCHcast powered by Yahoo! Music
LAUNCHcast allowed users to create personal radio stations or playlists of songs tailored to their musical tastes.
To create a personal station, users rated music on a 4-star or 100-point (depending on one's preference) scale. The service used those ratings to create a personal station of songs based on a user's favorite genres, artists, albums, and songs. The generated playlist contained a combination of rated and recommended songs. The ratio of rated/recommended songs could be specified by each user, but by default it was 50/50.
A recommendation engine suggested songs that might have matched a user's particular musical taste according to the following similarity criteria:
Songs from the same artist
Songs from the same album
Songs from the same genre
Songs recommended by users with similar musical tastes
Songs recommended by Yahoo!
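The playlist blending described above (a configurable ratio of rated to recommended songs, 50/50 by default) can be illustrated with a short sketch. The Python below is a simplified illustration only; LAUNCHcast's actual engine also weighted genres, artists, albums and the ratings of similar users, and the function and parameter names here are hypothetical.

```python
import random

def build_playlist(rated_songs, recommended_songs, length=20, rated_ratio=0.5):
    """Draw a share of the playlist from the user's rated songs and fill
    the remainder with recommended songs, then shuffle the result."""
    n_rated = int(length * rated_ratio)
    playlist = random.sample(rated_songs, min(n_rated, len(rated_songs)))
    remaining = length - len(playlist)
    playlist += random.sample(recommended_songs, min(remaining, len(recommended_songs)))
    random.shuffle(playlist)
    return playlist

# Example: a 10-song station built from two hypothetical pools.
print(build_playlist([f"rated {i}" for i in range(30)],
                     [f"recommended {i}" for i in range(30)], length=10))
```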
Users were not required to participate in the ratings system to listen to music. Pre-programmed stations based on theme, genre, or artist were available throughout the Yahoo Music website.
Music videos could also be rated, allowing users to create personal music video channels as well. For legal reasons, specific songs could not be played whenever one wished. However, videos could be. The service could generate a personal video channel based on a single selection.
Users also had the option to turn off explicit lyrics while listening to their customized stations.
Free accounts
Users could share their personal stations publicly and listen to other users' stations.
When LAUNCHcast Plus was introduced in January 2003, music remained available to stream for free at "Low" or "Medium" quality; in 2007, these were combined into "Standard".
Between tracks, free accounts would hear commercial advertising for the Yahoo service and its partners and affiliates. The advertisements were generally 30 seconds. In 2007 Yahoo added permanent banner ads to the LAUNCHcast player.
Because LAUNCHcast was only compatible with Internet Explorer, an alternative was to use the Yahoo Music Engine, which was renamed Jukebox in version 2 of the same software. The Jukebox was unable to stream music after September 2008, although it remained available for download well into the following year.
Limited skipping was available, at up to 5 skips per hour. Previously, banning a song skipped it automatically, but this behaviour was removed in October 2003 with a redesign of the LAUNCHcast player. Skips not used in the previous hour did not roll over.
Free accounts were limited to playing up to 1000 songs/mo (up to 12,000/yr) without any special restrictions. A song could be skipped to bypass an undesired track, but skipped songs counted against the monthly allowance. If a free account user exceeded the monthly limit, the user would no longer be able to listen to LAUNCHcast radio for the remainder of the month, although they could listen to their personal station with no skips and at a lower bandwidth. Like skips, songs did not roll over to the next month.
Free users had access to only specific stations labeled "free". Such stations had a yellow headphones icon whereas premium stations had a blue plus icon.
Pausing was only possible after 30 seconds into the song, although a song could be skipped before the 30 seconds by pressing "stop" and then starting the station again.
LAUNCHcast Plus
On January 29, 2003, Yahoo introduced a premium version of the LAUNCHcast service called LAUNCHcast Plus. Some users subscribed to this service on a monthly ($3.99/mo [$47.88/yr]) or annual basis ($35.88/yr [$2.99/mo]), or it came as bundled software from some ISPs (included in price) such as Verizon Yahoo online services. In addition to the features offered by the free account, LAUNCHcast Plus users received the following additional benefits:
"High" quality sound (CD-quality)
No commercials or banner ads
Access to all LAUNCHcast pre-programmed stations
Unlimited skipping
Unlimited monthly listening
Access to all artists, songs, and albums (subject to licensing restrictions by country)
The ability to designate other users' stations as "influencers" of one's own personal radio station
The ability to create "moods" (genre-based subsets of a user's personal radio station)
Pausing at any time
LAUNCHcast Plus was only offered in the US and Canada through Yahoo. On November 2, 2008, Verizon Yahoo announced via e-mail that certain services would be discontinued, including LAUNCHcast Plus. In an e-mail delivered in January 2009, Yahoo stated that "the LAUNCHcast Plus premium service will be closing on February 12, 2009". LAUNCHcast Plus had previously been available to AT&T and Verizon subscribers at no charge.
2009–2012: LAUNCHcast/Yahoo! Music Radio powered by CBS
With the rise of royalty rates, Yahoo signed a deal with CBS Radio that effectively eliminated LAUNCHcast as it had previously existed, replacing it with Yahoo's 150 pre-programmed stations as well as CBS's local music, news/talk, and sports stations, under the name LAUNCHcast powered by CBS Radio. Yahoo Q&A pages attempted to downplay the loss of functionality those changes entailed. Personalized stations ceased to exist, though for a short time Yahoo! did save previous users' song, artist and album ratings. Since the new format organized radio stations by genre, listeners had a very limited range in what music they heard unless they regularly switched from station to station. The change also eliminated the feature that suggested songs and artists based on the user's ratings. Listeners had the option to listen to those stations in high-quality (broadband) audio as well as to use the 6 skips per hour (not applicable on local stations). One way around the inability to skip songs was to simply hit the browser's refresh button; while it took time for the page to reload, it was faster than waiting for a song to finish.
Listeners were allowed to use the out-of-five rating feature that influenced the stations. While banning a song entirely was impossible, giving a song a one-star rating caused it to be played very rarely. The rating tool was removed from the Yahoo Music Radio player on September 21 for a time, until it was restored on November 2. Ratings in the new player were not saved back into Yahoo! users' listings, and as of 2012, Yahoo Music and CBS Radio did not associate radio ratings with their profiles. Yahoo encouraged users to rate songs, artists, and albums throughout their site as well as through recommendations based on their tastes, though there was little benefit to the listener in doing so. In its first incarnation, play-on-demand was not provided, nor were rewind, playback, and fast-forward.
Many ads could be skipped, but some could not, and many disabled all buttons, forcing the listener to hear the ad before any music was played. Ads which could be skipped had no label or video; however, skipping an ad counted toward the hourly skip limit. As of March 2009, the ads that disabled all buttons were advertising hair products. Such ads often played upon the player's launch, and some of them had a video, which caused problems due to increased memory usage. An occasional Nesquik ad disabled the pause and skip buttons, but the channel could still be changed. Refreshing the web page would generally work to skip ads.
As with the old service, unused skips did not roll-over. However, LAUNCHcast by CBS provided unlimited listening.
For the first time, LAUNCHcast powered by CBS Radio was also available to Firefox and Safari users. It was also available as an app on the iPhone.
The fan radio feature returned to LAUNCHcast five months after CBS's takeover. Listeners could access the fan stations on the artist page by clicking on the "Artist Radio" link that corresponded with the artist/group. They also had the option to type their favorite artist into the player itself.
Unlike the old service, Yahoo Radio by CBS did not give users the option to turn off explicit lyrics. Although a hard rock or hip-hop station might carry edited content, some explicit songs were mixed in. The user could either tolerate the language, skip the song, or change the station.
Since the merger, the LAUNCHcast branding slowly diminished, although LAUNCHcast was still verbally mentioned during some of their commercial breaks until March 2010.
On February 4, 2010, Yahoo Music Radio blocked users outside the U.S. from streaming online radio. An error message pointed to Last.fm: "We're sorry, this station is unavailable from your current location. Instead, enjoy listening to...."
In July 2010, it was renamed as Yahoo! Music Radio powered by Radio.com with the launch of CBS Radio's Radio.com service.
As of July 2011, many Yahoo users reported that the LAUNCHcast plugin for Yahoo Messenger no longer worked. User attempts to contact customer service were unsuccessful.
2012–2013/14: Yahoo Music Radio powered by iHeartRadio
On June 28, 2012, Yahoo Music severed ties with CBS Radio and formed a new alliance with Clear Channel's iHeartRadio. Many of the pre-programmed stations on Yahoo Music's roster were eliminated and replaced by over 1,000 live broadcast and digital-only stations. For example, the "Indie Rock" station was redirected to Los Angeles-based KCRW's Eclectic24 station, and "Today's Big Hits" was redirected to WHYI-FM in Miami, Florida or another suggested Top 40/KISS-FM station. Unlike Yahoo! Radio's previous two services, listeners used a "Thumbs Up/Thumbs Down" rating tool to rate the songs they liked or disliked. The new deal also let users create personalized stations based on their favorite artist or song. Users had the option to turn off explicit lyrics, but doing so disabled customization altogether. To rate and create customized channels, Yahoo users had to sign up with iHeartRadio.
This made Yahoo the exclusive web and mobile destination for fans of the 2012 iHeartRadio Music Festival, which took place Sept. 21–22 at the MGM Grand Hotel in Las Vegas, Nevada. Additionally, Yahoo became Clear Channel's web and mobile live webcast partner for 11 more live events that year.
Closure
Although Yahoo!'s deal with iHeartRadio continued, sometime in late 2013 or early 2014 Yahoo! Music's radio page was discontinued without an announcement, redirecting visitors to the main Yahoo! Music page and ending Yahoo! Music's internet radio services. The Yahoo! Music page itself was merged into Yahoo's Entertainment page four years later.
Geographic availability
The free version of LAUNCHcast was available in most areas of the world. However, content was varied by country due to music licensing restrictions.
The LAUNCHcast Plus premium service was widely available in the United States and Canada. In the United Kingdom it was restricted to BT Yahoo! Internet customers.
In Canada, LAUNCHcast and LAUNCHcast Plus were dismantled altogether as of April 15, 2009. In Australia, the LAUNCHcast service was rebranded "Yahoo Music Radio" until it was dismantled on July 7, 2009. Many other countries followed suit prior to the relaunch of Yahoo Music's worldwide sites.
Technological requirements
The LAUNCHcast music player (from its launch to February 2009) required Microsoft's Windows Media Player 9.0 or higher to function, although it could not be streamed from Windows Media Player itself. Before the merger with CBS Radio, LAUNCHcast only worked with Microsoft's Internet Explorer 6.0 or later with Flash 6.0 or higher, and in Yahoo!'s Messenger and Music Engine programs, on Microsoft Windows 98, ME, 2000 Professional and XP (Home and Professional). In the Firefox web browser, LAUNCHcast did not load properly, or only with extra configuration work.
According to Yahoo, the LAUNCHcast music player was not compatible with the Mac OS X or Linux operating systems; however, as of February 16, 2009, that was no longer the case. Yahoo stated that following the switch to CBS, loading the player in Firefox would become possible.
Since the relaunch of LAUNCHcast by CBS, users were only required to download the latest Flash Player plug-in (at the time, version 10). See external links (below) for Yahoo's help page on system requirements. The requirement remained in place after the Yahoo/iHeartRadio merger.
Video
Before moving to only use Windows Media, early video playback offered the choice of Windows Media Player or RealPlayer, of which the latter was cross-platform and available for Linux.
After Windows Media was made the only delivery format, requiring Windows Media Player 6.4 or 7.1 at minimum, Firefox failed to play videos. To work around this, users had to carefully set up a specialised Windows Media ActiveX extension for Firefox, and (temporarily) tweak the browser user agent with the User Agent Switcher extension to identify as Internet Explorer 6.0 in order to play content. This only worked in Windows and not in Linux.
Browser and operating system compatibility issues were largely rectified after the default player was changed to Flash Player, which was cross-platform to the extent that it supported at least Windows, Mac, and Linux.
Legal troubles
On April 27, 2007, Yahoo defeated Sony BMG in a copyright infringement lawsuit involving LAUNCHcast's personalization features. At issue was whether or not LAUNCHcast's "personal radio station" constitutes an "interactive" service, which requires a negotiated license agreement with a record company, or a "non-interactive" service, which requires a cheaper "compulsory license" from SoundExchange. In an "interactive" service, users can play songs on demand, but with LAUNCHcast they can only influence whether or not a particular song appears in their station.
After a six-year litigation, a jury decided that LAUNCHcast was not required to negotiate licenses as an "interactive" service, and that the service's compulsory licenses as a "non-interactive" service were sufficient. The plaintiffs appealed the decision but on August 21, 2009 the United States Court of Appeals for the Second Circuit upheld the lower court's decision, finding that users did not have sufficient control over the playlists generated by LAUNCHcast to render it an "interactive service".
See also
Yahoo Music
References
Music Radio
Internet radio stations in the United States
Radio stations established in 1999
Radio stations disestablished in 2014
Defunct radio stations in the United States |
74255 | https://en.wikipedia.org/wiki/Pac-Man | Pac-Man | is a 1980 maze action video game developed and released by Namco for arcades. The original Japanese title of Puck Man was changed to Pac-Man for international releases as a preventative measure against defacement of the arcade machines by changing the P to an F. In North America, the game was released by Midway Manufacturing as part of its licensing agreement with Namco America. The player controls Pac-Man, who must eat all the dots inside an enclosed maze while avoiding four colored ghosts. Eating large flashing dots called "Power Pellets" causes the ghosts to temporarily turn blue, allowing Pac-Man to eat them for bonus points.
Game development began in early 1979, directed by Toru Iwatani with a nine-man team. Iwatani wanted to create a game that could appeal to women as well as men, because most video games of the time had themes of war or sports. Although the inspiration for the Pac-Man character was the image of a pizza with a slice removed, Iwatani has said he also rounded out the Japanese character for mouth, kuchi (口). The in-game characters were made to be cute and colorful to appeal to younger players. The original Japanese title of Puckman was derived from the titular character's hockey-puck shape; the character is now the mascot and flagship icon of Bandai Namco Entertainment.
Pac-Man was a widespread critical and commercial success, leading to several sequels, merchandise, and two television series, as well as a hit single by Buckner & Garcia. The franchise remains one of the highest-grossing and best-selling games, generating more than $14 billion in revenue () and 43 million units in sales combined, and has an enduring commercial and cultural legacy, commonly listed as one of the greatest video games of all time.
Gameplay
Pac-Man is an action maze chase video game; the player controls the eponymous character through an enclosed maze. The objective of the game is to eat all of the dots placed in the maze while avoiding four colored ghosts — Blinky (red), Pinky (pink), Inky (cyan), and Clyde (orange) — that pursue him. When Pac-Man eats all of the dots, the player advances to the next level. If Pac-Man makes contact with a ghost, he will lose a life; the game ends when all lives are lost. Each of the four ghosts have their own unique, distinct artificial intelligence (A.I.), or "personalities"; Blinky gives direct chase to Pac-Man, Pinky and Inky try to position themselves in front of Pac-Man, usually by cornering him, and Clyde will switch between chasing Pac-Man and fleeing from him.
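The ghosts' chase behaviours are commonly summarized in terms of the maze tile each ghost targets. The Python sketch below restates that widely documented targeting scheme in simplified form; it is not the original arcade code, and it omits the scatter/chase timing, the ghost house logic, and quirks of the original hardware (such as the offset when Pac-Man faces up).

```python
import math

CLYDE_CORNER = (0, 35)   # approximate bottom-left scatter corner, in tiles

def chase_target(ghost, ghost_pos, pac_pos, pac_dir, blinky_pos):
    """Simplified chase-mode target tile for each ghost.
    Positions are (x, y) maze tiles; pac_dir is a unit vector (dx, dy)."""
    px, py = pac_pos
    dx, dy = pac_dir
    if ghost == "blinky":                    # Shadow: chase Pac-Man's own tile
        return (px, py)
    if ghost == "pinky":                     # Speedy: aim four tiles ahead of Pac-Man
        return (px + 4 * dx, py + 4 * dy)
    if ghost == "inky":                      # Bashful: double the vector from Blinky
        ax, ay = px + 2 * dx, py + 2 * dy    # to the point two tiles ahead of Pac-Man
        bx, by = blinky_pos
        return (2 * ax - bx, 2 * ay - by)
    if ghost == "clyde":                     # Pokey: chase when far, retreat when near
        if math.dist(ghost_pos, pac_pos) > 8:
            return (px, py)
        return CLYDE_CORNER
```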
Placed at the four corners of the maze are large flashing "energizers", or "power pellets". Eating these will cause the ghosts to turn blue with a dizzied expression and reverse direction. Pac-Man can eat blue ghosts for bonus points; when eaten, their eyes make their way back to the center box in the maze, where the ghosts "regenerate" and resume their normal activity. Eating multiple blue ghosts in succession increases their point value. After a certain amount of time, blue-colored ghosts will flash white before turning back into their normal, lethal form. Eating a certain number of dots in a level will cause a bonus item, usually in the form of a fruit, to appear underneath the center box, which can be eaten for bonus points.
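The successive bonus for eating ghosts on a single energizer doubles each time (200, 400, 800 and 1,600 points in the arcade original, as commonly documented), which the short Python sketch below illustrates.

```python
def ghost_points(nth_ghost):
    """Points for the nth ghost (0-based) eaten on one energizer:
    200 doubles with each successive ghost."""
    return 200 * (2 ** nth_ghost)

print([ghost_points(n) for n in range(4)])   # [200, 400, 800, 1600]
```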
The game increases in difficulty as the player progresses; the ghosts become faster and the energizers' effect decreases in duration to the point where the ghosts will no longer turn blue and edible. To the sides of the maze are two "warp tunnels", which allow Pac-Man and the ghosts to travel to the opposite side of the screen. Ghosts become slower when entering and exiting these tunnels. Levels are indicated by the fruit icon at the bottom of the screen. In-between levels are short cutscenes featuring Pac-Man and Blinky in humorous, comical situations. The game becomes unplayable at the 256th level due to an integer overflow that affects the game's memory.
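The level 256 "split screen" is usually explained, in published disassembly analyses, as an eight-bit counter wrapping around inside the routine that draws the fruit row. The Python sketch below illustrates that explanation in simplified form; it is not the original Z80 code, and the exact on-screen corruption is not modelled.

```python
def draw_fruit_row(level_counter):
    """level_counter is the internal, 0-based level number, stored in one byte.
    Returns how many fruit icons the drawing loop would emit."""
    count = (level_counter + 1) & 0xFF   # 255 + 1 wraps to 0 in eight bits
    if count > 7:
        count = 7                        # ordinary high levels clamp the row to 7 icons
    drawn = 0
    while True:                          # decrement-before-test drawing loop
        count = (count - 1) & 0xFF
        drawn += 1                       # each pass writes one icon into video memory
        if count == 0:
            break
    return drawn

print(draw_fruit_row(6))     # level 7   -> 7 icons, as intended
print(draw_fruit_row(255))   # level 256 -> 256 icons spill across half the screen
```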
Development
After acquiring the struggling Japanese division of Atari in 1974, video game developer Namco began producing its own video games in-house, as opposed to simply licensing them from other developers and distributing them in Japan. Company president Masaya Nakamura created a small video game development group within the company and ordered them to study several NEC-produced microcomputers to potentially create new games with. One of the first people assigned to this division was a young 24-year-old employee named Toru Iwatani. He created Namco's first video game Gee Bee in 1978, which while unsuccessful helped the company gain a stronger foothold in the quickly-growing video game industry. He also assisted in the production of two sequels, Bomb Bee and Cutie Q, both released in 1979.
The Japanese video game industry had surged in popularity with games such as Space Invaders and Breakout, which led to the market being flooded with similar titles from other manufacturers in an attempt to cash in on the success. Iwatani felt that arcade games only appealed to men for their crude graphics and violence, and that arcades in general were seen as seedy environments. For his next project, Iwatani chose to create a non-violent, cheerful video game that appealed mostly to women, as he believed that attracting women and couples into arcades would potentially make them appear to be much-more family friendly in tone. Iwatani began thinking of things that women liked to do in their time; he decided to center his game around eating, basing this on women liking to eat desserts and other sweets. His game was initially called Pakkuman, based on the Japanese onomatopoeia term “paku paku taberu”, referencing the mouth movement of opening and closing in succession.
The game that later became Pac-Man began development in early 1979 and took a year and five months to complete, the longest ever for a video game up to that point. Iwatani enlisted the help of nine other Namco employees to assist in production, including composer Toshio Kai, programmer Shigeo Funaki, and hardware engineer Shigeichi Ishimura. Care was taken to make the game appeal to a “non-violent” audience, particularly women, with its usage of simple gameplay and cute, attractive character designs. When the game was being developed, Namco was underway with designing Galaxian, which utilized a then-revolutionary RGB color display, allowing sprites to use several colors at once instead of utilizing colored strips of cellophane that was commonplace at the time; this technological accomplishment allowed Iwatani to greatly enhance his game with bright pastel colors, which he felt would help attract players. The idea for energizers was a concept Iwatani borrowed from Popeye the Sailor, a cartoon character that temporarily acquires superhuman strength after eating a can of spinach; it is also believed that Iwatani was also partly inspired by a Japanese children's story about a creature that protected children from monsters by devouring them. Frank Fogleman, the co-founder of Gremlin Industries, believes that the maze-chase gameplay of Pac-Man was inspired by Sega's Head On (1979), a similar arcade game that was popular in Japan.
Iwatani has often claimed that the character of Pac-Man himself was designed after the shape of a pizza with a missing slice while he was at lunch; in a 1986 interview he said that this was only half-truth, and that the Pac-Man character was also based on him rounding out and simplifying the Japanese character “kuchi” (口), meaning “mouth”. The four ghosts were made to be cute, colorful and appealing, utilizing bright, pastel colors and expressive blue eyes. Iwatani had used this idea before in Cutie Q, which features similar ghost-like characters, and decided to incorporate it into Pac-Man. He was also inspired by the television series Casper the Friendly Ghost and the manga Obake no Q-Taro. Ghosts were chosen as the game's main antagonists due to them being used as villainous characters in animation. The idea for the fruit bonuses was based on graphics displayed on slot machines, which often use symbols such as cherries and bells.
Originally, Namco president Masaya Nakamura had requested that all of the ghosts be red and thus indistinguishable from one another. Iwatani believed that the ghosts should be different colors, and he received unanimous support from his colleagues for this idea. Each of the ghosts were programmed to have their own distinct personalities, so as to keep the game from becoming too boring or impossibly difficult to play. Each ghost's name gives a hint to its strategy for tracking down Pac-Man: Shadow ("Blinky") always chases Pac-Man, Speedy ("Pinky") tries to get ahead of him, Bashful ("Inky") uses a more complicated strategy to zero in on him, and Pokey ("Clyde") alternates between chasing him and running away. (The ghosts' Japanese names, translated into English, are Chaser, Ambusher, Fickle, and Stupid, respectively.) To break up the tension of constantly being pursued, humorous intermissions between Pac-Man and Blinky were added. The sound effects were among the last things added to the game, created by Toshio Kai. In a design session, Iwatani noisily ate fruit and made gurgling noises to describe to Kai how he wanted the eating effect to sound. Upon completion, the game was titled Puck Man, based on the working title and the titular character's distinct hockey puck-like shape.
Release
Location testing for Puck Man began on May 22, 1980 in Shibuya, Tokyo, to a relatively positive fanfare from players. A private showing for the game was done in June, followed by a nationwide release in July. Eyeing the game's success in Japan, Namco initialized plans to bring the game to international countries, particularly the United States. Before showing the game to distributors, Namco America made a number of changes, such as altering the names of the ghosts. The biggest of these was the game's title; executives at Namco were worried that vandals would change the “P” in Puck Man to an “F”, forming an obscene name. Masaya Nakamura chose to rename it to Pac-Man, as he felt it was closer to the game's original Japanese title of Pakkuman. In Europe, the game was released under both titles, Pac-Man and Puck Man.
When Namco presented Pac-Man and Rally-X to potential distributors at the 1980 AMOA tradeshow in November, executives believed that Rally-X would be the best-selling game of that year. According to Play Meter magazine, both Pac-Man and Rally-X received mild attention at the show. Namco had initially approached Atari to distribute Pac-Man, but Atari refused the offer. Midway Manufacturing subsequently agreed to distribute both Pac-Man and Rally-X in North America, announcing their acquisition of the manufacturing rights on November 22 and releasing them in December.
Conversions
Pac-Man was ported to a plethora of home video game systems and personal computers; the most infamous of these is the 1982 Atari 2600 conversion, designed by Tod Frye and published by Atari. This version of the game was widely criticized for its inaccurate portrayal of the arcade version and for its peculiar design choices, most notably the flickering effect of the ghosts. However, it was a commercial success, having sold over seven million copies. Atari also released versions for the Intellivision, Commodore VIC-20, Commodore 64, Apple II, IBM PC, Texas Instruments TI-99/4A, ZX Spectrum, and the Atari 8-bit family of computers. A port for the Atari 5200 was released in 1983, a version that many have seen as a significant improvement over the Atari 2600 version.
Namco themselves released a version for the Family Computer in 1984 as one of the console's first third-party titles, as well as a port for the MSX computer. The Famicom version was later released in North America for the Nintendo Entertainment System by Tengen, a subsidiary of Atari Games. Tengen also produced an unlicensed version of the game in a black cartridge shell, released during a time where Tengen and Nintendo were in bitter disagreements over the latter's stance on quality control for their consoles; this version was later re-released by Namco as an official title in 1993, featuring a new cartridge label and box. The Famicom version was released for the Famicom Disk System in 1990 as a budget title for the Disk Writer kiosks in retail stores. The same year, Namco released a port of Pac-Man for the Game Boy, which allowed for two-player co-operative play via the Game Link Cable peripheral. A version for the Game Gear was released a year later, which also enabled support for multiplayer. In celebration of the game's 20th anniversary in 1999, Namco re-released the Game Boy version for the Game Boy Color, bundled with Pac-Attack and titled Pac-Man: Special Color Edition. The same year, Namco and SNK co-published a port for the Neo Geo Pocket Color, which came with a circular "Cross Ring" that attached to the d-pad to restrict it to four-directional movement.
In 2001, Namco released a port of Pac-Man for various Japanese mobile phones, being one of the company's first mobile game releases. The Famicom version of the game was re-released for the Game Boy Advance in 2004 as part of the Famicom Mini series, released to commemorate the 25th anniversary of the Famicom; this version was also released in North America and Europe under the Classic NES Series label. Namco Networks released Pac-Man for BREW mobile devices in 2005. The arcade original was released for the Xbox Live Arcade service in 2006, featuring achievements and online leaderboards. In 2009 a version for iOS devices was published; this release was later rebranded as Pac-Man + Tournaments in 2013, featuring new mazes and leaderboards. The NES version was released for the Wii Virtual Console in 2007. A Roku version was released in 2011, alongside a port of the Game Boy release for the 3DS Virtual Console. Pac-Man was one of four titles released under the Arcade Game Series brand, which was published for the Xbox One, PlayStation 4 and PC in 2016. In 2021, along with Xevious, it was also released by Hamster Corporation for the Nintendo Switch and PlayStation 4 as part of the Arcade Archives series, marking the first of two Namco games to be included as part of the series.
Pac-Man is included in many Namco compilations, including Namco Museum Vol. 1 (1995), Namco Museum 64 (1999), Namco Museum Battle Collection (2005), Namco Museum DS (2007), Namco Museum Essentials (2009), and Namco Museum Megamix (2010). In 1996, it was re-released for arcades as part of Namco Classic Collection Vol. 2, alongside Dig Dug, Rally-X and special "Arrangement" remakes of all three titles. Microsoft included Pac-Man in Microsoft Return of Arcade (1995) as a way to help attract video game companies to their Windows 95 operating system. Namco released the game in the third volume of Namco History in Japan in 1998. The 2001 Game Boy Advance compilation Pac-Man Collection compiles Pac-Man, Pac-Mania, Pac-Attack and Pac-Man Arrangement onto one cartridge. Pac-Man is also a hidden extra in the arcade game Ms. Pac-Man/Galaga - Class of 1981 (2001). A similar cabinet was released in 2005 that featured Pac-Man as the centerpiece. Pac-Man 2: The New Adventures (1993) and Pac-Man World 2 (2002) have Pac-Man as an unlockable extra. Alongside the Xbox 360 remake Pac-Man Championship Edition, it was ported to the Nintendo 3DS in 2012 as part of Pac-Man & Galaga Dimensions. The 2010 Wii game Pac-Man Party and its 2011 3DS remake also include Pac-Man as a bonus game, alongside the arcade versions of Dig Dug and Galaga. In 2014, Pac-Man was included in the compilation title Pac-Man Museum for the Xbox 360, PlayStation 3 and PC, alongside several other Pac-Man games. The NES version is one of 30 games included in the NES Classic Edition.
Reception
Upon its North American debut at AMOA 1980, the game initially received a mild response. Play Meter magazine previewed the game and called it "a cute game which appears to grow on players, something which cute games are not prone to do." They said there's "more to the game than at first appears" but criticized the sound as a drawback, saying it's "good for awhile, then becomes annoying." Upon release, the game exceeded expectations with wide critical and commercial success.
Commercial performance
When it was first released in Japan, Pac-Man was initially only a modest success; Namco's own Galaxian (1979) had quickly outdone the game in popularity, due to the predominantly male player base being familiar with its shooting gameplay as opposed to Pac-Man's cute characters and maze-chase theme. Pac-Man eventually became very successful in Japan, where it went on to be Japan's highest-grossing arcade game of 1980 according to the annual Game Machine charts, dethroning Space Invaders (1978), which had topped the annual charts for two years in a row, and leading to a shift in the Japanese market away from space shooters towards action games featuring comical characters. Pac-Man was also Japan's fourth highest-grossing arcade game of 1981.
In North America, Pac-Man became a nationwide success. Within one year, more than 100,000 arcade units had been sold which grossed more than in quarters. Midway had limited expectations prior to release, initially manufacturing 5,000 units for the US, before it caught on immediately upon release there. Some arcades purchased entire rows of Pac-Man cabinets. It overtook Atari's Asteroids (1979) as the best-selling arcade game in the country, and surpassed the film Star Wars: A New Hope (1977) with more than in revenue. Pac-Man was America's highest-grossing arcade game of 1981, and second highest game of 1982. By 1982, it was estimated to have had 30 million active players across the United States. The game's success was partly driven by its popularity among female audiences, becoming "the first commercial videogame to involve large numbers of women as players" according to Midway's Stan Jarocki, with Pac-Man being the favorite coin-op game among female gamers through 1982. Among the nine arcade games covered by How to Win Video Games (1982), Pac-Man was the only one with females accounting for a majority of players.
The number of arcade units sold had tripled to 400,000 by 1982, receiving an estimated total of between seven billion coins and . In a 1983 interview, Nakamura said that though he did expect Pac-Man to be successful, "I never thought it would be this big." Pac-Man is the best-selling arcade game of all time (surpassing Space Invaders), with total estimated earnings ranging from coins and $3.5 billion ($7.7 billion adjusted for inflation) to ( adjusted for inflation) in arcades. Pac-Man and Ms. Pac-Man also topped the US RePlay cocktail arcade cabinet charts for 23 months, from February 1982 through 1983 up until February 1984.
The Atari 2600 version of the game sold over copies, making it the console's best-selling title. In addition, Coleco's tabletop mini-arcade unit sold over units in 1982, the Pac-Man Nelsonic Game Watch sold more than 500,000 units the same year, the Family Computer (Famicom) version and its 2004 Game Boy Advance re-release sold a combined 598,000 copies in Japan, the Atari 5200 version sold cartridges between 1986 and 1988, the Atari XE computer version sold copies in 1986 and 1990, Thunder Mountain's 1986 budget release for home computers received a Diamond certification from the Software Publishers Association in 1989 for selling over 500,000 copies, and mobile phone ports have sold over paid downloads . II Computing also listed the Atarisoft port tenth on the magazine's list of top Apple II games as of late 1985, based on sales and market-share data. , all versions of Pac-Man are estimated to have grossed a total of more than in revenue.
Accolades
Pac-Man was awarded "Best Commercial Arcade Game" at the 1982 Arcade Awards. Pac-Man also won the Video Software Dealers Association's VSDA Award for Best Videogame. In 2001, Pac-Man was voted the greatest video game of all time by a Dixons poll in the UK. The Killer List of Videogames listed Pac-Man as the most popular game of all time. The list aggregator site Playthatgame currently ranks Pac-Man as the #53rd top game of all-time & game of the year.
Impact
Pac-Man is considered by many to be one of the most influential video games of all time: it established the maze chase game genre, was the first video game to feature power-ups, and gave its individual ghosts deterministic artificial intelligence (AI) that reacts to player actions. Pac-Man is considered one of the first video games to have demonstrated the potential of characters in the medium; its title character was the first original gaming mascot, it increased the appeal of video games with female audiences, and it was gaming's first broad licensing success. It is often cited as the first game with cutscenes (in the form of brief comical interludes about Pac-Man and Blinky chasing each other), though Space Invaders Part II had employed a similar style of between-level intermissions in 1979.
Pac-Man was a turning point for the arcade video game industry, which had previously been dominated by space shoot 'em ups since Space Invaders (1978). Pac-Man popularized a genre of "character-led" action games, leading to a wave of character action games involving player characters in 1981, such as Nintendo's prototypical platform game Donkey Kong, Konami's Frogger and Universal Entertainment's Lady Bug. Pac-Man was one of the first popular non-shooting action games, defining key elements of the genre such as "parallel visual processing" which requires simultaneously keeping track of multiple entities, including the player's location, the enemies, and the energizers.
"Maze chase" games exploded on home computers after the release of Pac-Man. Some of them appeared before official ports and garnered more attention from consumers, and sometimes lawyers, as a result. These include Taxman (1981) and Snack Attack (1982) for the Apple II, Jawbreaker (1981) for the Atari 8-bit family, Scarfman (1981) for the TRS-80, and K.C. Munchkin! (1981) for the Odyssey². Namco themselves produced several other maze chase games, including Rally-X (1980), Dig Dug (1982), Exvania (1992), and Tinkle Pit (1994). Atari sued Philips for creating K.C. Munchkin in the case Atari, Inc. v. North American Philips Consumer Electronics Corp., leading to Munchkin being pulled from store shelves under court order. No major competitors emerged to challenge Pac-Man in the maze-chase subgenre.
Pac-Man also inspired 3D variants of the concept, such as Monster Maze (1982), Spectre (1982), and early first-person shooters such as MIDI Maze (1987; which also had similar character designs). John Romero credited Pac-Man as the game that had the biggest influence on his career; Wolfenstein 3D includes a Pac-Man level from a first-person perspective. Many post-Pac-Man titles include power-ups that briefly turn the tables on the enemy. The game's artificial intelligence inspired programmers who later worked for companies like Bethesda.
Legacy
Guinness World Records has awarded the Pac-Man series eight records in Guinness World Records: Gamer's Edition 2008, including "Most Successful Coin-Operated Game". On June 3, 2010, at the NLGD Festival of Games, the game's creator Toru Iwatani officially received the certificate from Guinness World Records for Pac-Man having had the most "coin-operated arcade machines" installed worldwide: 293,822. The record was set and recognized in 2005 and mentioned in the Guinness World Records: Gamer's Edition 2008, but finally actually awarded in 2010. In 2009, Guinness World Records listed Pac-Man as the most recognizable video game character in the United States, recognized by 94% of the population, above Mario who was recognized by 93% of the population. The Pac-Man character and game series became an icon of video game culture during the 1980s.
The game has inspired various real-life recreations, involving real people or robots. One event called Pac-Manhattan set a Guinness World Record for "Largest Pac-Man Game" in 2004.
The business term "Pac-Man defense" in mergers and acquisitions refers to a hostile takeover target that attempts to reverse the situation and instead acquire its attempted acquirer, a reference to Pac-Mans energizers. The "Pac-Man renormalization" is named for a cosmetic resemblance to the character, in the mathematical study of the Mandelbrot set. The game's popularity has also led to "Pac-Man" being adopted as a nickname, such as by boxer Manny Pacquiao and the American football player Adam Jones.
On August 21, 2016, in the 2016 Summer Olympics closing ceremony, during a video which showcases Tokyo as the host of the 2020 Summer Olympics, a small segment shows Pac-Man and the ghosts racing and eating dots on a running track.
Merchandise
A wide variety of Pac-Man merchandise has been marketed with the character's image. By 1982, Midway had about 95-105 licensees selling Pac-Man merchandise, including major companies such as AT&T, which sold a Pac-Man telephone. There were more than 500 Pac-Man related products.
Pac-Man themed merchandise sales had exceeded ( adjusted for inflation) in the US by 1982. Pac-Man related merchandise included bumper stickers, jewellery, accessories (such as a $20,000 Ms. Pac-Man choker with 14 karat gold), bicycles, breakfast cereals, popsicles, t-shirts, toys, handheld electronic game imitations, and pasta.
Television
The Pac-Man animated television series produced by Hanna–Barbera aired on ABC from 1982 to 1983. It was the highest-rated Saturday morning cartoon show in the US during late 1982.
A computer-generated animated series titled Pac-Man and the Ghostly Adventures aired on Disney XD in June 2013.
Music
The Buckner & Garcia song "Pac-Man Fever" (1981) went to No. 9 on the Billboard Hot 100 charts, and received a Gold certification for more than 1 million records sold by 1982, and a total of 2.5 million copies sold as of 2008. More than one million copies of the group's Pac-Man Fever album (1982) were sold.
In 1982, "Weird Al" Yankovic recorded a parody of "Taxman" by the Beatles as "Pac-Man". It was eventually released in 2017 as part of Squeeze Box: The Complete Works of "Weird Al" Yankovic. In 1992, Aphex Twin (with the name Power-Pill) released Pac-Man, a techno album which consists mostly of samples from the game.
On July 20, 2020, Gorillaz and ScHoolboy Q, released a track entitled "PAC-MAN" as a part of Gorillaz' Song Machine series to commemorate the game's 40th anniversary, with the music video depicting the band's frontman, 2-D, playing a Gorillaz-themed Pac-Man arcade game.
Film
The Pac-Man character appears in the film Pixels (2015), with Denis Akiyama playing series creator Toru Iwatani. Iwatani makes a cameo at the beginning of the film as an arcade technician. Pac-Man is referenced and makes an appearance in the 2017 film Guardians of the Galaxy Vol. 2. The game, the character, and the ghosts all also appear in the film Wreck-It Ralph, as well as the sequel Ralph Breaks the Internet.
In Sword Art Online The Movie: Ordinal Scale, Kirito and his friends beat a virtual reality game called PAC-Man 2024. In the Japanese tokusatsu film Kamen Rider Heisei Generations: Dr. Pac-Man vs. Ex-Aid & Ghost with Legend Riders, a Pac-Man-like character is the main villain.
The 2018 film Relaxer uses Pac-Man as a strong plot element in the story of a 1999 couch-bound man who attempts to beat the game (and encounters the famous Level 256 glitch) before the year 2000 problem occurs.
In 2008, a feature film based on the game was in development.
Other gaming media
In 1982, Milton Bradley released a board game based on Pac-Man. Players move up to four Pac-Man characters (traditional yellow plus red, green, and blue) plus two ghosts as per the throws of a pair of dice. The two ghost pieces were randomly packed with one of four colors.
Sticker manufacturer Fleer included rub-off game cards with its Pac-Man stickers. The card packages contain a Pac-Man style maze with all points along the path hidden with opaque coverings. From the starting position, the player moves around the maze while scratching off the coverings to score points.
A Pac-Man-themed downloadable content package for Minecraft was released in 2020 in commemoration of the game's 40th anniversary. This pack introduced a new ghost called 'Creepy', based on the Creeper.
Perfect scores and other records
A perfect score on the original Pac-Man arcade game is 3,333,360 points, achieved when the player obtains the maximum score on the first 255 levels by eating every dot, energizer, fruit and blue ghost without losing a man, then uses all six men to obtain the maximum possible number of points on level 256.
The first person to achieve a publicly witnessed and verified perfect score without manipulating the game's hardware to freeze play was Billy Mitchell, who performed the feat on July 3, 1999. Some recordkeeping organizations removed Mitchell's score after a 2018 investigation by Twin Galaxies concluded that two unrelated Donkey Kong score performances submitted by Mitchell had not used an unmodified original circuit board. As of July 2020, seven other gamers had achieved perfect Pac-Man scores on original arcade hardware. The world record for the fastest completion of a perfect score, according to Twin Galaxies, is currently held by David Race with a time of 3 hours, 28 minutes, 49 seconds.
In December 1982, eight-year-old boy Jeffrey R. Yee received a letter from United States president Ronald Reagan congratulating him on a world record score of 6,131,940 points, possible only if he had passed level 256. In September 1983, Walter Day, chief scorekeeper at Twin Galaxies at the time, took the U.S. National Video Game Team on a tour of the East Coast to visit gamers who claimed the ability to pass that level. None demonstrated such an ability. In 1999, Billy Mitchell offered $100,000 to anyone who could pass level 256 before January 1, 2000. The offer expired with the prize unclaimed.
After announcing in 2018 that it would no longer recognize the first perfect score on Pac-Man, Guinness World Records reversed that decision and reinstated Billy Mitchell's 1999 performance on June 18, 2020.
Remakes and sequels
Pac-Man inspired a long series of sequels, remakes, and re-imaginings, and is one of the longest-running video game franchises in history. The first of these was Ms. Pac-Man, developed by the American-based General Computer Corporation and published by Midway in 1982. The character's gender was changed to female in response to Pac-Man's popularity with women, with new mazes, moving bonus items, and faster gameplay being implemented to increase its appeal. Ms. Pac-Man is one of the best-selling arcade games in North America, where Pac-Man and Ms. Pac-Man had become the most successful machines in the history of the amusement arcade industry. Legal concerns over who owned the game resulted in Ms. Pac-Man becoming the property of Namco, who assisted in its production. Ms. Pac-Man inspired its own line of remakes, including Ms. Pac-Man Maze Madness (2000) and Ms. Pac-Man: Quest for the Golden Maze, and is also included in many Namco and Pac-Man collections for consoles.
Namco's own follow-up to the original was Super Pac-Man, released in 1982. This was followed by the Japan-exclusive Pac & Pal in 1983. Midway produced many other Pac-Man sequels during the early 1980s, including Pac-Man Plus (1982), Jr. Pac-Man (1983), Baby Pac-Man (1983), and Professor Pac-Man (1984). Other games include the isometric Pac-Mania (1987), the side-scrollers Pac-Land (1984), Hello! Pac-Man (1994), and Pac-In-Time (1995), the 3D platformer Pac-Man World (1999), and the puzzle games Pac-Attack (1991) and Pac-Pix (2005). Iwatani designed Pac-Land and Pac-Mania, both of which remain his favorite games in the series. Pac-Man Championship Edition, published for the Xbox 360 in 2007, was Iwatani's final game before leaving the company. Its neon visuals and fast-paced gameplay were met with acclaim, leading to the creation of Pac-Man Championship Edition DX (2010) and Pac-Man Championship Edition 2 (2016).
Coleco's tabletop Mini-Arcade versions of the game yielded 1.5 million units sold in 1982. Nelsonic Industries produced a Pac-Man LCD wristwatch game with a simplified maze also in 1982.
Namco Networks sold a downloadable Windows PC version of Pac-Man in 2009 which also includes an enhanced mode which replaces all of the original sprites with the sprites from Pac-Man Championship Edition. Namco Networks made a downloadable bundle which includes its PC version of Pac-Man and its port of Dig Dug called Namco All-Stars: Pac-Man and Dig Dug. In 2010, Namco Bandai announced the release of the game on Windows Phone 7 as an Xbox Live game.
For the weekend of May 21–23, 2010, Google changed the logo on its homepage to a playable version of the game in recognition of the 30th anniversary of the game's release. The Google Doodle version of Pac-Man was estimated to have been played by more than 1 billion people worldwide in 2010, so Google later gave the game its own page.
In April 2011, Soap Creative published World's Biggest Pac-Man, working together with Microsoft and Namco-Bandai to celebrate Pac-Man's 30th anniversary. It is a multiplayer browser-based game with user-created, interlocking mazes.
For April Fools' Day in 2017, Google created a playable version of the game on Google Maps, where users could play using the on-screen map.
Notes
References
Further reading
Comprehensive coverage on the history of the entire series up through 1999.
Morris, Chris (May 10, 2005). "Pac Man Turns 25". CNN Money.
Vargas, Jose Antonio (June 22, 2005). "Still Love at First Bite: At 25, Pac-Man Remains a Hot Pursuit". The Washington Post.
Hirschfeld, Tom. How to Master the Video Games, Bantam Books, 1981. Strategy guide for a variety of arcade games including Pac-Man. Includes drawings of some of the common patterns.
External links
Pac-Man highscores on Twin Galaxies
Pac-Man on Arcade History
1980 video games
Android (operating system) games
Arcade video games
Pac-Man arcade games
Atari 5200 games
Atari 8-bit family games
ColecoVision games
Commodore 64 games
Commodore VIC-20 games
Famicom Disk System games
FM-7 games
Game Boy Advance games
Game Boy games
Game Gear games
Video games about ghosts
Intellivision games
IOS games
IPod games
MacOS games
Maze games
Midway video games
Mobile games
MSX games
Namco arcade games
NEC PC-6001 games
NEC PC-8001 games
NEC PC-8801 games
NEC PC-9801 games
Neo Geo Pocket Color games
Nintendo Entertainment System games
SAM Coupé games
Sharp MZ games
Sharp X1 games
Sharp X68000 games
Tengen (company) games
Tiger handheld games
U.S. Gold games
Vertically-oriented video games
Video game franchises
Video games about food and drink
Video games developed in Japan
Virtual Console games
Windows Phone games
Xbox 360 Live Arcade games
ZX Spectrum games
Z80
Articles containing video clips
1980s fads and trends
Video game franchises introduced in 1980
Video games adapted into television shows |
81763 | https://en.wikipedia.org/wiki/Antilochus%20of%20Pylos | Antilochus of Pylos | In Greek mythology, Antilochus (; Ancient Greek: Ἀντίλοχος Antílokhos) was a prince of Pylos and one of the Achaeans in the Trojan War.
Family
Antilochus was the son of King Nestor either by Anaxibia or Eurydice. He was the brother to Thrasymedes, Pisidice, Polycaste, Perseus, Stratichus, Aretus, Echephron and Pisistratus.
Mythology
One of the suitors of Helen, Antilochus accompanied his father and his brother Thrasymedes to the Trojan War. He was distinguished for his beauty, swiftness of foot, and skill as a charioteer. Though the youngest among the Greek princes, he commanded the Pylians in the war and performed many deeds of valour. He was a favorite of the gods and a friend of Achilles, to whom he was commissioned to announce the death of Patroclus.
When his father Nestor was attacked by Memnon, Antilochus sacrificed himself to save him, thus fulfilling an oracle which had warned to "beware of an Ethiopian." Antilochus' death was avenged by Achilles, who drove the Trojans back to the gates, where he was himself killed by Paris. In later accounts, Antilochus was slain by Hector or by Paris in the temple of the Thymbraean Apollo together with Achilles. His ashes, along with those of Achilles and Patroclus, were enshrined in a mound on the promontory of Sigeion, where the inhabitants of Ilium offered sacrifice to the dead heroes. In the Odyssey, the three friends are represented as united in the underworld and walking together in the Asphodel Meadows. According to Pausanias, they dwell together on the island of Leuke.
Among the Trojans he killed were Melanippus, Ablerus, Atymnius, Phalces, Echepolos, and Thoon, although Hyginus records that he only killed two Trojans. At the funeral games of Patroclus, Antilochus finished second in the chariot race and third in the foot race.
Antilochus left behind in Messenia a son, Paeon, whose descendants were among the Neleidae expelled from Messenia by the descendants of Heracles.
Notes
References
Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website.
Dares Phrygius, from The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian translated by Richard McIlwaine Frazer, Jr. (1931-). Indiana University Press. 1966. Online version at theio.com
Gaius Julius Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project.
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library.
Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . Greek text available at the Perseus Digital Library.
Homer, The Odyssey with an English Translation by A.T. Murray, PH.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1919. . Online version at the Perseus Digital Library. Greek text available from the same website.
Pausanias, Description of Greece. W. H. S. Jones (translator). Loeb Classical Library. Cambridge, Massachusetts: Harvard University Press; London, William Heinemann Ltd. (1918). Vol. 1. Books I–II: .
Pindar, Odes translated by Diane Arnson Svarlien. 1990. Online version at the Perseus Digital Library.
Pindar, The Odes of Pindar including the Principal Fragments with an Introduction and an English Translation by Sir John Sandys, Litt.D., FBA. Cambridge, MA., Harvard University Press; London, William Heinemann Ltd. 1937. Greek text available at the Perseus Digital Library.
Strabo, The Geography of Strabo. Edition by H.L. Jones. Cambridge, Mass.: Harvard University Press; London: William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Strabo, Geographica edited by A. Meineke. Leipzig: Teubner. 1877. Greek text available at the Perseus Digital Library.
Suitors of Helen
Achaean Leaders
Pylian characters in Greek mythology
Children of Nestor (mythology) |
43998499 | https://en.wikipedia.org/wiki/Open%20Semantic%20Framework | Open Semantic Framework | The Open Semantic Framework (OSF) is an integrated software stack using semantic technologies for knowledge management. It has a layered architecture that combines existing open source software with additional open source components developed specifically to provide a complete Web application framework. OSF is made available under the Apache 2 license.
OSF is a platform-independent Web services framework for accessing and exposing structured data, semi-structured data, and unstructured data using ontologies to reconcile semantic heterogeneities within the contributing data and schema. Internal to OSF, all data is converted to RDF to provide a common data model. The OWL 2 ontology language is used to describe the data schema overlaying all of the constituent data sources.
The architecture of OSF is built around a central layer of RESTful web services, designed to enable most constituent modules within the software stack to be substituted without major adverse impacts on the entire stack. A central organizing perspective of OSF is that of the dataset. These datasets contain the records in any given OSF instance. One or more domain ontologies are used by a given OSF instance to define the structural relationships amongst the data and their attributes and concepts.
Some of the use applications for OSF include local government, health information systems, community indicator systems, eLearning, citizen engagement, or any domain that may be modeled by ontologies.
Documentation and training videos are provided with the open-source OSF application.
History
Early components of OSF were provided under the names of structWSF and conStruct starting in June 2009. The first version 1.x of OSF was announced in August 2010. The first automated OSF installer was released in March 2012. OSF was expanded with an ontology manager, structOntology in August 2012. The version 2.x developments of OSF occurred for enterprise sponsors in the period of early 2012 until the end of 2013. None of these interim 2.x versions were released to the public. Then, at the conclusion of this period, Structured Dynamics, the main developer of OSF, refactored these specific enterprise developments to leapfrog to a new version 3.0 of OSF, announced in early 2014. These public releases were last updated to OSF version 3.4.0 in August 2016.
Architecture and technologies
The Open Semantic Framework has a basic three-layer architecture. User interactions and content management are provided by an external content management system, currently Drupal (though OSF does not depend on it). This layer accesses the pivotal OSF Web Services; there are now more than 20 of them, providing OSF's distributed computing functionality. Full CRUD access, user permissions, and security are provided for all digital objects in the stack. This middleware layer then provides a means to access the third layer, the engines and indexers that drive the entire stack. Both the top CMS layer and the engines layer are provided by existing off-the-shelf software. What makes OSF a complete stack are the connecting scripts and the intermediate Web services layer.
The premise of the OSF stack is based on the RDF data model. RDF provides the means for integrating existing structured data assets in any format, with semi-structured data like XML and HTML, and unstructured documents or text. The OSF framework is made operational via ontologies that capture the domain or knowledge space, matched with internal ontologies that guide OSF operations and data display. This design approach is known as ODapps, for ontology-driven applications.
Content management layer
OSF delegates all direct user interactions and standard content management to an external CMS. In the case of Drupal, this integration is tighter, and supports connectors and modules that can replace standard Drupal storage and databases with OSF triplestores.
Web services layer
This intermediate OSF Web Services layer may also be accessed directly via API or command line or utilities like cURL, suitable for interfacing with standard content management systems (CMSs), or via a dedicated suite of connectors and modules that leverage the open source Drupal CMS. These connectors and modules, also part of the standard OSF stack and called OSF for Drupal, natively enable Drupal's existing thousands of modules and ecosystem of developers and capabilities to access OSF using familiar Drupal methods.
The OSF middleware framework is generally RESTful in design and is based on HTTP and Web protocols and W3C open standards. The initial OSF framework comes packaged with a baseline set of more than 20 Web services in CRUD, browse, search, tagging, ontology management, and export and import. All Web services are exposed via APIs and SPARQL endpoints. Each request to an individual Web service returns an HTTP status and optionally a document of resultsets. Each results document can be serialized in many ways, and may be expressed as either RDF, pure XML, JSON, or other formats.
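As an illustration of this request pattern, the short Python sketch below sends a keyword search to a hypothetical OSF instance. The base URL, endpoint path, parameter names and dataset URI are invented for the example and are not taken from OSF's documented interface; only the general shape (an HTTP call whose Accept header selects the serialization, returning an HTTP status and an optional resultset document) follows the description above.

import requests

OSF_BASE = "http://osf.example.org/ws"                 # hypothetical OSF instance
DATASET = "http://osf.example.org/datasets/demo/"      # hypothetical dataset URI

def search(query, accept="application/json"):
    """Send a keyword search to a hypothetical OSF search endpoint."""
    resp = requests.get(
        f"{OSF_BASE}/search/",
        params={"query": query, "datasets": DATASET},  # parameter names are assumed
        headers={"Accept": accept},                    # caller chooses the serialization
    )
    resp.raise_for_status()                            # every request returns an HTTP status
    return resp.json()                                 # optional resultset document

results = search("municipal budgets")
print(len(results.get("resultset", [])), "records returned")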
Engines layer
The engines layer represents the major workflow requirements and data management and indexing of the system. The premise of the Open Semantic Framework is based on the RDF data model. Using a common data model means that all Web services and actions against the data only need to be programmed via a single, canonical form. Simple converters convert external, native data formats to the RDF form at time of ingest; similar converters can translate the internal RDF form back into native forms for export (or use by external applications). This use of a canonical form leads to a simpler design at the core of the stack and a uniform basis to which tools or other work activities can be written.
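The sketch below illustrates the canonical-form idea using the Python rdflib library: a native record is converted to RDF triples at ingest time and can be serialized back out for export. The namespace and predicate names are invented for the illustration; an actual OSF instance would use its own domain ontology.

from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/ontology/")          # invented domain ontology namespace

def record_to_rdf(record_id, record):
    """Convert a native record (a plain dict) into RDF triples."""
    g = Graph()
    subject = URIRef(f"http://example.org/records/{record_id}")
    g.add((subject, RDF.type, EX.Record))
    for attribute, value in record.items():
        g.add((subject, EX[attribute], Literal(value)))  # each attribute becomes a predicate
    return g

g = record_to_rdf("42", {"label": "City park", "area_ha": 12.5})
print(g.serialize(format="turtle"))                      # the canonical RDF form, as Turtle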
The OSF engines are all open source and work to support this premise. The OSF engines layer governs the index and management of all OSF content. Documents are indexed by the Solr engine for full-text search, while information about their structural characteristics and metadata are stored in an RDF triplestore database provided by OpenLink's Virtuoso software. The schema aspects of the information (the "ontologies") are separately managed and manipulated with their own W3C standard application, the OWL API. At ingest time, the system automatically routes and indexes the content into its appropriate stores. Another engine, GATE (General Architecture for Text Engineering), provides semi-automatic assistance in tagging input information and other natural language processing (NLP) tasks.
Alternatives
OSF is sometimes referred to as a linked data application. Alternative applications in this space include:
Callimachus
CubicWeb
LOD2 Stack
Apache Marmotta
The Open Semantic Framework also has alternatives in the semantic publishing and semantic computing arenas.
See also
Data integration
Data management
Drupal
Enterprise information integration
Knowledge organization
Linked data
Middleware
Ontology-based data integration
Resource Description Framework
Resource-oriented architecture
Semantic computing
Semantic integration
Semantic publishing
Semantic search
Semantic service-oriented architecture
Semantic technology
Software framework
Web Ontology Language
References
External links
Official website
Drupal
GATE
Open Semantic Framework code repository at GitHub
OSF interest group
OWL API
Virtuoso
Further information
Technical documentation library at the
Video training series at the
Free content management systems
Free software programmed in PHP
Knowledge management
Ontology (information science)
Semantic Web
Web frameworks |
149708 | https://en.wikipedia.org/wiki/IBM%207030%20Stretch | IBM 7030 Stretch | The IBM 7030, also known as Stretch, was IBM's first transistorized supercomputer. It was the fastest computer in the world from 1961 until the first CDC 6600 became operational in 1964.
Originally designed to meet a requirement formulated by Edward Teller at Lawrence Livermore National Laboratory, the first example was delivered to Los Alamos National Laboratory in 1961, and a second customized version, the IBM 7950 Harvest, to the National Security Agency in 1962. The Stretch at the Atomic Weapons Research Establishment at Aldermaston, England was heavily used by researchers there and at AERE Harwell, but only after the development of the S2 Fortran Compiler which was the first to add dynamic arrays, and which was later ported to the Ferranti Atlas of Atlas Computer Laboratory at Chilton.
The 7030 was much slower than expected and failed to meet its aggressive performance goals. IBM was forced to drop its price from $13.5 million to only $7.78 million and withdrew the 7030 from sales to customers beyond those having already negotiated contracts. PC World magazine named Stretch one of the biggest project management failures in IT history.
Within IBM, being eclipsed by the smaller Control Data Corporation seemed hard to accept. The project lead, Stephen W. Dunwell, was initially made a scapegoat for his role in the "failure", but as the success of the IBM System/360 became obvious, he was given an official apology and, in 1966 was made an IBM Fellow.
In spite of Stretch's failure to meet its own performance goals, it served as the basis for many of the design features of the successful IBM System/360, which shipped in 1964.
Development history
In early 1955, Dr. Edward Teller of the University of California Radiation Laboratory wanted a new scientific computing system for three-dimensional hydrodynamic calculations. Proposals were requested from IBM and UNIVAC for this new system, to be called Livermore Automatic Reaction Calculator or LARC. According to IBM executive Cuthbert Hurd, such a system would cost roughly $2.5 million and would run at one to two MIPS. Delivery was to be two to three years after the contract was signed.
At IBM, a small team at Poughkeepsie including John Griffith and Gene Amdahl worked on the design proposal. Just after they finished and were about to present the proposal, Ralph Palmer stopped them and said, "It's a mistake." The proposed design would have been built with either point-contact transistors or surface-barrier transistors, both likely to be soon outperformed by the then newly invented diffusion transistor.
IBM returned to Livermore and stated that they were withdrawing from the contract, and instead proposed a dramatically better system, "We are not going to build that machine for you; we want to build something better! We do not know precisely what it will take but we think it will be another million dollars and another year, and we do not know how fast it will run but we would like to shoot for ten million instructions per second." Livermore was not impressed, and in May 1955 they announced that UNIVAC had won the LARC contract, now called the Livermore Automatic Research Computer. LARC would eventually be delivered in June 1960.
In September 1955, fearing that Los Alamos National Laboratory might also order a LARC, IBM submitted a preliminary proposal for a high-performance binary computer based on the improved version of the design that Livermore had rejected, which they received with interest. In January 1956, Project Stretch was formally initiated. In November 1956, IBM won the contract with the aggressive performance goal of a "speed at least 100 times the IBM 704" (i.e. 4 MIPS). Delivery was slated for 1960.
During design, it proved necessary to reduce the clock speeds, making it clear that Stretch could not meet its aggressive performance goals, but estimates of performance ranged from 60 to 100 times the IBM 704. In 1960, the price of $13.5 million was set for the IBM 7030. In 1961, actual benchmarks indicated that the performance of the IBM 7030 was only about 30 times the IBM 704 (i.e. 1.2 MIPS), causing considerable embarrassment for IBM. In May 1961, Tom Watson announced a price cut of all 7030s under negotiation to $7.78 million and immediate withdrawal of the product from further sales.
Its floating-point addition time is 1.38–1.50 microseconds, multiplication time is 2.48–2.70 microseconds, and division time is 9.00–9.90 microseconds.
Technical impact
While the IBM 7030 was not considered successful, it spawned many technologies incorporated in future machines that were highly successful. The Standard Modular System transistor logic was the basis for the IBM 7090 line of scientific computers, the IBM 7070 and 7080 business computers, the IBM 7040 and IBM 1400 lines, and the IBM 1620 small scientific computer; the 7030 used about 169,100 transistors. The IBM 7302 Model I Core Storage units were also used in the IBM 7090, IBM 7070 and IBM 7080. Multiprogramming, memory protection, generalized interrupts, the eight-bit byte for I/O
were all concepts later incorporated in the IBM System/360 line of computers as well as most later central processing units (CPU).
Stephen Dunwell, the project manager who became a scapegoat when Stretch failed commercially, pointed out soon after the phenomenally successful 1964 launch of System/360 that most of its core concepts were pioneered by Stretch. By 1966 he had received an apology and been made an IBM Fellow, a high honor that carried with it resources and authority to pursue one's desired research.
Instruction pipelining, prefetch and decoding, and memory interleaving were used in later supercomputer designs such as the IBM System/360 Models 91, 95 and 195, and the IBM 3090 series, as well as computers from other manufacturers. These techniques are still used in most advanced microprocessors, starting with the 1990s generation that included the Intel Pentium and the Motorola/IBM PowerPC, as well as in many embedded microprocessors and microcontrollers from various manufacturers.
Hardware implementation
The 7030 CPU uses emitter-coupled logic (originally called current-steering logic) on 18 types of Standard Modular System (SMS) cards. It uses 4,025 double cards (as shown) and 18,747 single cards, holding 169,100 transistors, requiring a total of 21 kW power. It uses high-speed NPN and PNP germanium drift transistors, with cut-off frequency over 100 MHz, and using ~50 mW each. Some third level circuits use a 3rd voltage level. Each logic level has a delay of about 20 ns. To gain speed in critical areas emitter-follower logic is used to reduce the delay to about 10 ns.
It uses the same core memory as the IBM 7090.
Installations
Los Alamos Scientific Laboratory (LASL), delivered in April 1961, accepted in May 1961, and used until June 21, 1971.
Lawrence Livermore National Laboratory, Livermore, California delivered November 1961.
U.S. National Security Agency in February 1962 as the main CPU of the IBM 7950 Harvest system, used until 1976, when the IBM 7955 Tractor tape system developed problems due to worn cams that could not be replaced.
Atomic Weapons Establishment, Aldermaston, England, delivered February 1962
U.S. Weather Bureau Washington D.C., delivered June/July 1962.
MITRE Corporation, delivered December 1962 and used until August 1971. In the spring of 1972, it was sold to Brigham Young University, where it was used by the physics department until scrapped in 1982.
U.S. Navy Dahlgren Naval Proving Ground, delivered Sep/Oct 1962.
Commissariat à l'énergie atomique, France, delivered November 1963.
IBM.
The Lawrence Livermore Laboratory's IBM 7030 (except for its core memory) and portions of the MITRE Corporation/Brigham Young University IBM 7030 now reside in the Computer History Museum collection, in Mountain View, California.
Architecture
Data formats
Fixed-point numbers are variable in length, stored in either binary (1 to 64 bits) or decimal (1 to 16 digits) and either unsigned format or sign/magnitude format. In decimal format, digits are variable length bytes (4 to 8 bits).
Floating point numbers have a 1-bit exponent flag, a 10-bit exponent, a 1-bit exponent sign, a 48-bit magnitude, and a 4-bit sign byte in sign/magnitude format (a decoding sketch follows this list).
Alphanumeric characters are variable length and can use any character code of 8 bits or less.
Bytes are variable length (1 to 8 bits).
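The following Python sketch splits a 64-bit floating-point word into the fields listed above. The packing order (most-significant field first, in the order given) is an assumption made for the illustration rather than a detail taken from the 7030 documentation.

def decode_stretch_float(word):
    """Split a 64-bit word into the fields listed above (field order assumed)."""
    assert 0 <= word < 1 << 64
    return {
        "exponent_flag": (word >> 63) & 0x1,             # 1 bit
        "exponent":      (word >> 53) & 0x3FF,           # 10 bits
        "exponent_sign": (word >> 52) & 0x1,             # 1 bit
        "magnitude":     (word >> 4) & ((1 << 48) - 1),  # 48 bits
        "sign_byte":     word & 0xF,                     # 4 bits
    }

print(decode_stretch_float(0x8000_0000_0000_0001))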
Instruction format
Instructions are either 32-bit or 64-bit.
Registers
The registers overlay the first 32 addresses of memory as shown.
The accumulator and index registers operate in sign-and-magnitude format.
Memory
Main memory is 16K to 256K 64-bit binary words, in banks of 16K.
The memory was immersion oil-heated/cooled to stabilize its operating characteristics.
Software
STRETCH Assembly Program (STRAP)
MCP (not to be confused with the Burroughs MCP)
COLASL and IVY programming languages
FORTRAN programming language
See also
IBM 608, the first commercially available transistorized computing device
ILLIAC II, a transistorized super computer from The University of Illinois that competed with Stretch.
Notes
References
Further reading
External links
Oral history interview with Gene Amdahl Charles Babbage Institute, University of Minnesota, Minneapolis. Amdahl discusses his role in the design of several computers for IBM including the STRETCH, IBM 701, 701A, and IBM 704. He discusses his work with Nathaniel Rochester and IBM's management of the design process for computers.
IBM Stretch Collections @ Computer History Museum
Collection index page
The IBM 7030 FORTRAN System
7030 Data Processing System (IBM Archives)
IBM Stretch (aka IBM 7030 Data Processing System)
Organization Sketch of IBM Stretch
BRL report on the IBM Stretch
Planning a Computer System – Project Stretch, 1962 book.
Scan of copy autographed by several of the contributors
Searchable PDF file
IBM 7030 documents at Bitsavers.org (PDF files)
7030
Computer-related introductions in 1961
64-bit computers |
593708 | https://en.wikipedia.org/wiki/Subpixel%20rendering | Subpixel rendering | Subpixel rendering is a way to increase the apparent resolution of a computer's liquid crystal display (LCD) or organic light-emitting diode (OLED) display by rendering pixels to take into account the screen type's physical properties. It takes advantage of the fact that each pixel on a color LCD is actually composed of individual red, green, and blue or other color subpixels to anti-alias text with greater detail or to increase the resolution of all image types on layouts which are specifically designed to be compatible with subpixel rendering.
Background
A single pixel on a color subpixelated display is made of several color primaries, typically three colored elements—ordered (on various displays) either as blue, green, and red (BGR), or as red, green, and blue (RGB). Some displays have more than three primaries, often called MultiPrimary, such as the combination of red, green, blue, and yellow (RGBY), or red, green, blue and white (RGBW), or even red, green, blue, yellow, and cyan (RGBYC).
These pixel components, sometimes called subpixels, appear as a single color to the human eye because of blurring by the optics and spatial integration by nerve cells in the eye. The components are easily visible, however, when viewed with a small magnifying glass, such as a loupe. Over a certain resolution threshold the colors in the subpixels are not visible, but the relative intensity of the components shifts the apparent position or orientation of a line.
Subpixel rendering is better suited to some display technologies than others. The technology is well-suited to LCDs and other technologies where each logical pixel corresponds directly to three or more independent colored subpixels, but less so for CRTs.
In a CRT the light from the pixel components often spreads across pixels, and the outputs of adjacent pixels are not perfectly independent. If a designer knew precisely about the display's electron beams and aperture grille, subpixel rendering might have some advantage but the properties of the CRT components, coupled with the alignment variations that are part of the production process, make subpixel rendering less effective for these displays.
The technique should have good application to organic light emitting diodes (OLED) and other display technologies that organize pixels the same way as LCDs.
History and patents
The origin of subpixel rendering as used today remains controversial. Apple, then IBM, and finally Microsoft patented various implementations with certain technical differences owing to the different purposes their technologies were intended for.
Microsoft has several patents in the United States on subpixel rendering technology for text rendering on RGB Stripe layouts. The patents 6,219,025, 6,239,783, 6,307,566, 6,225,973, 6,243,070, 6,393,145, 6,421,054, 6,282,327, 6,624,828 were filed between October 7, 1998, and October 7, 1999, thus should expire on October 7, 2019. Analysis by FreeType of the patent indicates that the idea of subpixel rendering is not covered by the patent, but the actual filter used as a last step to balance the color is. Microsoft's patent describes the smallest filter possible that distributes each subpixel value to an equal amount of R,G, and B pixels. Any other filter will either be blurrier or will introduce color artifacts.
Apple was able to use it in Mac OS X due to a patent cross-licensing agreement.
Apple II
It is sometimes claimed (such as by Steve Gibson) that the Apple II, introduced in 1977, supports an early form of subpixel rendering in its high-resolution (280×192) graphics mode. However, the method Gibson describes can also be viewed as a limitation of the way the machine generates color, rather than as a technique intentionally exploited by programmers to increase resolution.
David Turner of the FreeType project criticized Gibson's theory as to the invention, at least as far as patent law is concerned, in the following way: «For the record, the Wozniak patent is explicitly referenced in the [Microsoft ], and the claims are worded precisely to avoid colliding with it (which is easy, since the Apple II only used 2 "sub-pixels", instead of the 'at minimum 3' claimed by MS).» Turner further explains his view:
The bytes that comprise the Apple II high-resolution screen buffer contain seven visible bits (each corresponding directly to a pixel) and a flag bit used to select between purple/green or blue/orange color sets. Each pixel, since it is represented by a single bit, is either on or off; there are no bits within the pixel itself for specifying color or brightness. Color is instead created as an artifact of the NTSC color encoding scheme, determined by horizontal position: pixels with even horizontal coordinates are always purple (or blue, if the flag bit is set), and odd pixels are always green (or orange). Two lit pixels next to each other are always white, regardless of whether the pair is even/odd or odd/even, and irrespective of the value of the flag bit. The foregoing is only an approximation of the true interplay of the digital and analog behavior of the Apple's video output circuits on one hand, and the properties of real NTSC monitors on the other hand. However, this approximation is what most programmers of the time would have in mind while working with the Apple's high-resolution mode.
Gibson's example claims that because two adjacent bits make a white block, there are in fact two bits per pixel: one which activates the purple left half of the pixel, and the other which activates the green right half of the pixel. If the programmer instead activates the green right half of a pixel and the purple left half of the next pixel, then the result is a white block that is 1/2 pixel to the right, which is indeed an instance of subpixel rendering. However, it is not clear whether any programmers of the Apple II have considered the pairs of bits as pixels—instead calling each bit a pixel. While the quote from Apple II inventor Steve Wozniak on Gibson's page seems to imply that vintage Apple II graphics programmers routinely used subpixel rendering, it is difficult to make a case that many of them thought of what they were doing in such terms.
The flag bit in each byte affects color by shifting pixels half a pixel-width to the right. This half-pixel shift was exploited by some graphics software, such as HRCG (High-Resolution Character Generator), an Apple utility that displayed text using the high-resolution graphics mode, to smooth diagonals. (Many Apple II users had monochrome displays, or turned down the saturation on their color displays when running software that expected a monochrome display, so this technique was useful.) Although it did not provide a way to address subpixels individually, it did allow positioning of pixels at fractional pixel locations and can thus be considered a form of subpixel rendering. However, this technique is not related to LCD subpixel rendering as described in this article.
IBM
IBM's U.S. Patent #5341153 — Filed: 1988-06-13, "Method of and apparatus for displaying a multicolor image" may cover some of these techniques.
ClearType
Microsoft announced its subpixel rendering technology, called ClearType, at COMDEX in 1998. Microsoft published a paper in May 2000, Displaced Filtering for Patterned Displays, describing the filtering behind ClearType. It was then made available in Windows XP, but was not activated by default until Windows Vista. (Windows XP OEMs, however, could and did change the default setting.)
FreeType
FreeType, the library used by most current software on the X Window System, contains two open source implementations. The original implementation uses the ClearType antialiasing filters and it carries the following notice: "The colour filtering algorithm of Microsoft's ClearType technology for subpixel rendering is covered by patents; for this reason the corresponding code in FreeType is disabled by default. Note that subpixel rendering per se is prior art; using a different colour filter thus easily circumvents Microsoft's patent claims."
FreeType offers a variety of color filters. Since version 2.6.2, the default filter is light, a filter that is both normalized (value sums up to 1) and color-balanced (eliminate color fringes at the cost of resolution).
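The Python sketch below illustrates the general filtering idea: glyph coverage is sampled at three times the horizontal resolution (one sample per subpixel), each sample is spread over its neighbours by a normalized filter, and the result is grouped into per-pixel R, G, B values. The 5-tap weights are a simple normalized example chosen for the illustration, not necessarily the coefficients used by any particular renderer.

def filter_row(coverage, weights=(1/9, 2/9, 3/9, 2/9, 1/9)):
    """coverage: one 0..1 sample per subpixel (three samples per output pixel)."""
    half = len(weights) // 2
    n = len(coverage)
    filtered = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(weights):
            j = i + k - half
            if 0 <= j < n:
                acc += w * coverage[j]       # spread each subpixel over its neighbours
        filtered.append(acc)
    # group every three filtered subpixel values into one (R, G, B) pixel
    return [tuple(filtered[p:p + 3]) for p in range(0, n - n % 3, 3)]

row = [0, 0, 0, 1, 1, 1, 0, 0, 0]            # one fully lit pixel out of three
print(filter_row(row))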
Since version 2.8.1, a second implementation exists, called Harmony, that "offers high quality LCD-optimized output without resorting to ClearType techniques of resolution tripling and filtering". This is the method enabled by default. When using this method, "each color channel is generated separately after shifting the glyph outline, capitalizing on the fact that the color grids on LCD panels are shifted by a third of a pixel. This output is indistinguishable from ClearType with a light 3-tap filter." Since the Harmony method does not require additional filtering, it is not covered by the ClearType patents.
SubLCD
SubLCD is another open source subpixel rendering method that claims it does not infringe existing patents, and promises to remain unpatented. It uses a "2-pixel" subpixel rendering, where G is one subpixel, and the R and B of two adjacent pixels are combined into a "purple subpixel", to avoid the Microsoft patent. This also has the claimed advantage of a more equal perceived brightness of the two subpixels, somewhat easier power-of-2 math, and a sharper filter. However it only produces 2/3 the resulting resolution.
David Turner was however skeptical of SubLCD's author's claims: "Unfortunately, I, as the FreeType author, do not share his enthusiasm. The reason is precisely the very vague patent claims [by Microsoft] described previously. To me, there is a non-negligible (even if small) chance, that these claims also cover the SubLCD technique. The situation would probably be different if we could invalidate the broader patent claims, but this is not the case currently."
CoolType
Adobe built their own subpixel renderer called CoolType, so they could display documents the same way across various operating systems: Windows, MacOS, Linux etc. When it was launched around the year 2001, CoolType supported a wider range of fonts than Microsoft's ClearType, which was then limited to TrueType fonts, whereas Adobe's CoolType also supported PostScript fonts (and their OpenType equivalent as well).
OS X
Mac OS X formerly used subpixel rendering as well, as part of Quartz 2D; however, it was removed after the introduction of Retina displays. Unlike Microsoft's implementation, which favors a tight fit to the grid (font hinting) to maximize legibility, Apple's implementation prioritizes the shape of the glyphs as set out by their designer.
PenTile
Starting in 1992, Candice H. Brown Elliott researched subpixel rendering and novel layouts, the PenTile matrix family pixel layout, which worked together with sub pixel rendering algorithms to raise the resolution of color flat-panel displays. In 2000, she co-founded Clairvoyante, Inc. to commercialize these layouts and subpixel rendering algorithms. In 2008, Samsung purchased Clairvoyante and simultaneously funded a new company, Nouvoyance, Inc., retaining much of the technical staff, with Ms. Brown Elliott as CEO.
With subpixel rendering technology, the number of points that may be independently addressed to reconstruct the image is increased. When the green subpixels are reconstructing the shoulders, the red subpixels are reconstructing near the peaks and vice versa. For text fonts, increasing the addressability allows the font designer to use spatial frequencies and phases that would have created noticeable distortions had it been whole pixel rendered. The improvement is most noted on italic fonts which exhibit different phases on each row. This reduction in moiré distortion is the primary benefit of subpixel rendered fonts on the conventional Stripe panel.
Although subpixel rendering increases the number of reconstruction points on the display this does not always mean that higher resolution, higher spatial frequencies, more lines and spaces, may be displayed on a given arrangement of color subpixels. A phenomenon occurs as the spatial frequency is increased past the whole pixel Nyquist limit from the Nyquist–Shannon sampling theorem; Chromatic aliasing (color fringes) may appear with higher spatial frequencies in a given orientation on the color subpixel arrangement.
Example with the common RGB stripes layout
For example, consider an RGB stripe panel:
(Illustration: rows of repeating red, green and blue subpixels which, viewed at normal distance, are perceived as rows of uniformly white pixels.)
Shown below is an example of black and white lines at the Nyquist limit, but at a slanting angle, taking advantage of subpixel rendering to use a different phase each row:
(Illustration: slanted black and white lines rendered at the subpixel level; the lit subpixels shift phase on each row so that the lines fall between whole-pixel positions.)
Shown below is an example of chromatic aliasing when the traditional whole pixel Nyquist limit is exceeded:
(Illustration: vertical black and white lines drawn at one cycle per four subpixels; the lit subpixel pairs are perceived as repeating yellow, cyan and magenta lines rather than white ones.)
This case shows the result of attempting to place vertical black and white lines at four subpixels per cycle on the RGB stripe architecture. One can visually see that the lines, instead of being white, are colored. Starting from the left, the first line is red combined with green to produce a yellow-colored line. The second line is green combined with blue to produce a pastel cyan-colored line. The third line is blue combined with red to produce a magenta-colored line. The colors then repeat: yellow, cyan, and magenta. This demonstrates that a spatial frequency of one cycle per four subpixels is too high. Attempts to go to a yet higher spatial frequency, such as one cycle per three subpixels, would result in a single solid color.
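The small Python simulation below reproduces this color sequence, assuming an R-G-B stripe ordering in which subpixel i carries channel i mod 3 and each line lights two adjacent subpixels every four subpixels.

CHANNELS = "RGB"                             # assumed R-G-B stripe ordering
NAMES = {frozenset("RG"): "yellow", frozenset("GB"): "cyan", frozenset("BR"): "magenta"}

def line_colours(n_lines=6, period=4, width=2):
    colours = []
    for line in range(n_lines):
        start = line * period                # a new line every four subpixels
        lit = {CHANNELS[(start + k) % 3] for k in range(width)}
        colours.append(NAMES.get(frozenset(lit), "+".join(sorted(lit))))
    return colours

print(line_colours())                        # ['yellow', 'cyan', 'magenta', 'yellow', 'cyan', 'magenta']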
Some LCDs compensate the inter-pixel color mix effect by having borders between pixels slightly larger than borders between subpixels. Then, in the example above, a viewer of such an LCD would see a blue line appearing adjacent to a red line instead of a single magenta line.
Example with alternated red/green stripes layout
Novel subpixel layouts have been developed to allow higher real resolution without chromatic aliasing. Shown here is one of the members of the PenTile matrix family of layouts. Shown below is an example of how a simple change to the arrangement of color subpixels may allow a higher limit in the horizontal direction:
In this case, the order of the red and green subpixels is interchanged every row to create a red and green checkerboard pattern with blue stripes. Note that the vertical subpixels could be split in half vertically to double the vertical resolution as well: current LCD panels already typically use two color LEDs (aligned vertically and displaying the same lightness, see the zoomed images below) to illuminate each vertical subpixel. This layout is one of the PenTile matrix family of layouts. When displaying the same number of black and white lines, the blue subpixels are set at half brightness:
(Illustration: the same black and white line pattern on the checkerboard layout; each lit column contains red and green subpixels at full brightness and blue subpixels at half brightness.)
Notice that every column that turns on comprises red and green subpixels at full brightness and blue subpixels at half value to balance it to white. Now, one may display black and white lines at up to one cycle per three subpixels without chromatic aliasing, twice that of the RGB stripe architecture.
Non-striped variants of the alternated red/green layout
Variants of the previous layout have been proposed by Clairvoyante/Nouvoyance (and demonstrated by Samsung) as members of the PenTile matrix family of layouts specifically designed for subpixel rendering efficiency.
For example, taking advantage of the doubled visible horizontal resolution, one could double the vertical resolution to make the definition more isotropic. However this would reduce the aperture of pixels, producing lower contrasts. A better alternative uses the fact that the blue subpixels are those that contribute the least to the visible intensity, so that they are less precisely located by the eye. Blue subpixels are then rendered just as a diamond in the center of a pixel square, and the rest of the pixel surface is split in four parts as a checker board of red and green subpixels with smaller sizes. Rendering images with this variant can use the same technique as before, except that now there's a near-isotropic geometry that supports both the horizontal and the vertical with the same geometric properties, making the layout ideal for displaying the same image details when the LCD panel can be rotated.
The doubled vertical and horizontal visual resolution allows the subpixel density to be reduced by about 33%, in order to increase their aperture by about 33% as well, with the same separation distance between subpixels (needed for their electronic interconnection). It also reduces power dissipation by about 50%, while the white/black contrast increases by about 50% and the visual pixel resolution is still enhanced by about 33% (i.e. about 125 dpi instead of 96 dpi), with only half the total number of subpixels for the same displayed surface.
Checkered RGBW layout
Another variant, called the RGBW Quad, uses a checkerboard with 4 subpixels per pixel, adding a white subpixel, or more specifically, replacing one of the green subpixels of the Bayer filter pattern with a white subpixel, to increase the contrast and reduce the energy needed to illuminate white pixels (because the color filters in classic striped panels absorb more than 65% of the total white light used to illuminate the panel). As each subpixel is a square instead of a thin rectangle, this also increases the aperture with the same average subpixel density and the same pixel density along both axes. As the horizontal density is reduced and the vertical density remains identical (for the same square pixel density), it becomes possible to increase the pixel density by about 33%, while keeping the contrast comparable to classic striped panels, taking advantage of the more efficient use of light and the lowered absorption by the color filters.
It is not possible to use subpixel rendering to increase the resolution without creating color fringes similar to those seen in classic striped panels, but the increased resolution compensates for them; in addition, the effective visibility of these color fringes is reduced by the presence of "color-neutral" white subpixels.
However, this layout allows a better rendering of greys, at the price of a lower color separation. But this is consistent with human vision and with modern image and video compression formats (like JPEG and MPEG) used in modern HDTV transmissions and in Blu-ray Discs.
Yet another variant, a member of the PenTile matrix family of subpixel layouts, alternates the order of the color and white subpixels every other row, allowing subpixel rendering to increase the resolution without chromatic aliasing. As before, the increased transmittance using the white subpixel allows higher subpixel density, but in this case, the displayed resolution is even higher due to the benefits of subpixel rendering:
(Illustration: the alternating layout including white subpixels, together with an example of black and white lines rendered on it.)
Visual resolution versus pixel resolution and software compatibility
Thus, not all layouts are created equal. Each particular layout may have a different "visual resolution", modulation transfer function limit (MTFL), defined as the highest number of black and white lines that may be simultaneously rendered without visible chromatic aliasing.
However, such alternate layouts are still not compatible with the subpixel rendering font algorithms used in Windows, Mac OS X and Linux, which currently support only the RGB or BGR horizontal striped subpixel layouts (rotated-monitor subpixel rendering is not supported on Windows or Mac OS X, though Linux supports it in most desktop environments). However, the PenTile matrix displays have a built-in subpixel rendering engine that allows conventional data sets to be converted to these layouts, providing plug'n'play compatibility with conventional layout displays. New display models should be proposed in the future that will allow monitor drivers to specify their visual resolution separately from the full pixel resolution, the relative position offsets of visible subpixels for each color plane, and their respective contribution to white intensity. Such monitor drivers would allow renderers to correctly adjust their geometry transform matrices in order to compute the values of each color plane correctly, and take the best advantage of subpixel rendering with the lowest chromatic aliasing.
Examples
Photos were taken with a Canon PowerShot A470 digital camera using "Super Macro" mode and 4.0× digital zoom. The screen used was that integrated into a Lenovo G550 laptop. Note that the display has RGB pixels. Displays exist in all four patterns (horizontal RGB/BGR and vertical RGB/BGR), but horizontal RGB is the most common. In addition, several color subpixel patterns have been developed specifically to take advantage of subpixel rendering. The best known of these is the PenTile matrix family of patterns.
The composite photographs below show three methods of fonts rendering for comparison. From top: Monochrome; Traditional (whole pixel) spatial anti-aliasing; Subpixel rendering.
See also
CoolType
Font rasterization
Kell factor
Sub-pixel resolution
References
External links
Former IBM researcher Ron Feigenblatt's remarks on Microsoft ClearType
John Daggett's Subpixel Explorer—requires Firefox to display properly
Texts Rasterization Exposures Article from the Anti-Grain Geometry Project.
http://jankautz.com/publications/SubpixelCGF13.pdf
http://www.cahk.hk/innovationforum/subpixel_rendering.pdf
Computer graphics
Vector graphics
21803071 | https://en.wikipedia.org/wiki/GrafX2 | GrafX2 | GrafX2 is a bitmap graphics editor inspired by the Amiga programs Deluxe Paint and Brilliance. It is free software and distributed under the GPL-2.0-or-later license.
History
GrafX2 was an MS-DOS program developed by Sunset Design from 1996 to 2001. It was distributed as freeware, and was one of the most used graphics editors in the demoscene. Development stopped due to the developers' lack of time, so they released the source code under the GPL-2.0-or-later license.
A Windows port was done by the Eclipse demogroup and presented at the State of the art party in 2004, but the source code for this version does not seem to be available anywhere.
In 2007, a project was started to port the source code from the original MS-DOS version to the Simple DirectMedia Layer library. The goal was to provide a pixel art editing tool for Linux, but SDL also allowed easy ports to many other platforms, including Windows. The project development continued on this new version to add the features missing from the original open-source release and the first Windows port.
Features and specificities
What made GrafX2 interesting when it was released in 1996 was the ability to display pictures in most of the resolutions available on the Amiga. This allowed the use of the program as a picture viewer for PC users. This was done by low-level programming of the video card, using X-Modes combined with VESA settings. The SDL port generally runs on platforms with high-resolution screens, so it can use software scaling to emulate low resolutions. The scaling options include several non-square pixel modes, which allows editing of pictures intended for display on old 16- or 8-bit microcomputers that have such video modes.
All the versions of the program are designed for drawing in indexed color mode, up to 256 colors. A palette editor allows very precise operations on the image and its palette. These functions are valuable for console or mobile game graphics, where specific color indices in the palette are required for special effects: palette swap, color cycling, or a transparent color for sprites.
The user interface is mouse-driven with a toolbar for common tools, and some modal dialog windows. For increased productivity with frequently used functions, an extensive system of keyboard shortcuts is available.
The user can split the editing area in two: the normal size on the left and a zoomed-in view on the right. Drawing in the zoomed area allows finer mouse control.
The basic drawing concepts are clearly inspired by Deluxe Paint; they involve:
a brush: either one of the built-in monochrome shapes, or a piece of colored bitmap grabbed by the user. The brush appears 'stuck' under the mouse cursor, giving an accurate preview.
a tool that pastes the brush on the image at several places : Freehand drawing, straight line, circle, curve, airbrush...
optionally, a number of Effects that change the way pixels are drawn: For example, the Shade mode ignores the brush color, it lightens or darkens the picture depending on the mouse button used (and depending on user-defined color ranges). Some of the effects are classic for a 24bit RGB drawing program (Transparency, Smoothing, Smearing), but their effectiveness in GrafX2 is limited according to the colors pre-defined in the palette.
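The sketch below is a simplified illustration of these concepts, not actual GrafX2 code; the image, brush and color-range values are invented. A monochrome brush is stamped onto an indexed-color canvas, and a Shade-style effect darkens an existing pixel by stepping down within a user-defined color range.

```python
# Toy indexed-color canvas: each value is an index into a 256-color palette.
WIDTH, HEIGHT = 8, 4
image = [[0] * WIDTH for _ in range(HEIGHT)]

# A 3x3 monochrome brush: 1 = paint with the current color, 0 = leave alone.
brush = [[0, 1, 0],
         [1, 1, 1],
         [0, 1, 0]]

def paste_brush(img, brush, cx, cy, color):
    """Stamp the brush centered at (cx, cy) using a single palette index."""
    for by, row in enumerate(brush):
        for bx, on in enumerate(row):
            x, y = cx + bx - 1, cy + by - 1
            if on and 0 <= x < WIDTH and 0 <= y < HEIGHT:
                img[y][x] = color

def shade_darken(img, cx, cy, lo, hi):
    """Shade-style effect: move the pixel one step down within the range [lo, hi]."""
    if 0 <= cx < WIDTH and 0 <= cy < HEIGHT and lo < img[cy][cx] <= hi:
        img[cy][cx] -= 1

paste_brush(image, brush, 3, 1, color=42)   # freehand/line tools call this per point
shade_darken(image, 3, 1, lo=40, hi=47)     # darken within an assumed color ramp
for row in image:
    print(row)
```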
The SDL port currently runs on many computer systems. It has been tested on common systems such as Linux, FreeBSD, Windows and macOS, and on less common ones such as AmigaOS 3.x on 68k, AmigaOS 4.0 on PPC, BeOS and Haiku, MorphOS on PPC, AROS on x86, SkyOS, and Atari MiNT on the Atari Falcon030 and Atari TT. It has even been ported to the handheld game console GP2X, and the Windows version can be used on MS-DOS through the HX DOS Extender.
Relation to the demoscene
The first release of GrafX2 was done at the Wired 96 demoparty. The tool was primarily made for demomakers.
This explains the presence of features specific to old computers, because demosceners often use this kind of hardware.
Today, the program is mostly used for Pixel art, not necessarily in relation to demos or to old and limited hardware.
Supported file formats
PKM (Sunset Design) (a custom format used only by GrafX2; it was created in the first version as an easy way to save pictures, before GIF support was fully implemented)
BMP (Microsoft, BMP file format)
CEL, KCF (K.O.S. Kisekae Set System)
GIF (Compuserve)
IMG (Bivas)
LBM (Electronic Arts) (Support for files from Deluxe Paint, but also a lot of Amiga paint programs)
PAL
PCX (Z-Soft)
PI1, PC1 (Degas Elite)
PNG (Portable Network Graphics) (only in the Windows and SDL ports)
SCx (Colorix)
NEO (NeoChrome)
C64 picture formats (Koala Painter, CDU-Paint, FLI, etc.)
CPC picture formats (PPH, CM5, etc.)
JPEG (only loading)
TGA (Truevision TGA only loading)
TIFF (Aldus Corporation)
RECOIL can be used to load a lot of native file formats of vintage computers.
See also
List of raster graphics editors
Comparison of raster graphics editors
References
External links
project homepage
source code git repository
GrafX2 for Windows (this is an old port of the original DOS code and should not be used anymore)
Linux packages: Debian, Ubuntu
Free raster graphics editors
Free software programmed in C
Raster graphics editors for Linux
Computer art
Demoscene software
Amiga software
BeOS software
Lua (programming language)-scriptable software
Free software that uses SDL |
386661 | https://en.wikipedia.org/wiki/Product%20key | Product key | A product key, also known as a software key, is a specific software-based key for a computer program. It certifies that the copy of the program is original.
Product keys consist of a series of numbers and/or letters. This sequence is typically entered by the user during the installation of computer software, and is then passed to a verification function in the program. This function manipulates the key sequence according to a mathematical algorithm and attempts to match the results to a set of valid solutions.
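Real key algorithms are proprietary and vary widely, but the hypothetical Python sketch below illustrates the general idea: the last group of a key is a checksum derived from the other groups, and the verification function accepts the key only if the checksum matches. The alphabet, group layout and checksum formula are all invented for illustration.

```python
# Toy product-key scheme (illustrative only; real schemes are proprietary
# and typically far more elaborate).
ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"   # avoids ambiguous characters

def checksum(groups):
    """Derive a 4-character check group from the data groups."""
    total = 0
    for ch in "".join(groups):
        total = (total * 31 + ALPHABET.index(ch)) % (len(ALPHABET) ** 4)
    out = ""
    for _ in range(4):
        total, r = divmod(total, len(ALPHABET))
        out = ALPHABET[r] + out
    return out

def is_valid(key):
    """A key has the form 'XXXX-XXXX-XXXX-CCCC' where CCCC is the checksum."""
    groups = key.split("-")
    if len(groups) != 4 or any(len(g) != 4 or any(c not in ALPHABET for c in g)
                               for g in groups):
        return False
    return groups[3] == checksum(groups[:3])

data = ["AB2C", "9XYZ", "QW34"]
key = "-".join(data + [checksum(data)])
print(key, is_valid(key), is_valid("AAAA-BBBB-CCCC-DDDD"))
```

Because the entire check runs on the user's machine, anyone who reverse-engineers the function can generate valid-looking keys, which is one reason publishers add the activation methods discussed below.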
Effectiveness
Standard key generation, where product keys are generated mathematically, is not completely effective in stopping copyright infringement of software, as these keys can be distributed. In addition, with improved communication from the rise of the Internet, more sophisticated attacks on keys such as cracks (removing the need for a key) and product key generators have become common.
Because of this, software publishers use additional product activation methods to verify that keys are both valid and uncompromised. One method assigns a product key based on a unique feature of the purchaser's computer hardware, which cannot be as easily duplicated since it depends on the user's hardware. Another method involves requiring one-time or periodical validation of the product key with an internet server (for games with an online component, this is done whenever the user signs in). The server can deactivate unmodified client software presenting invalid or compromised keys. Modified clients may bypass these checks, but the server can still deny those clients information or communication.
Controversy
Some of the most effective product key protections are controversial due to inconvenience, strict enforcement, harsh penalties and, in some cases, false positives. Some product keys use uncompromising digital procedures to enforce the license agreement.
Inconvenience
Product keys are somewhat inconvenient for end users. Not only do they need to be entered whenever a program is installed, but the user must also be sure not to lose them. Loss of a product key usually means the software is useless once uninstalled, unless, prior to uninstallation, a key recovery application is used (although not all programs support this).
Product keys also present new ways for distribution to go wrong. If a product is shipped with missing or invalid keys, then the product itself is useless. For example, all copies of Splinter Cell: Pandora Tomorrow originally shipped to Australia without product keys.
Enforcement and penalties
There are many cases of companies enforcing permanent bans after detecting usage violations. It is common for an online system to immediately blacklist an account caught running cracks or, in some cases, cheats, resulting in a permanent ban. Players who wish to continue using the software must repurchase it. This has inevitably led to criticism over the motivations behind enforcing permanent bans.
Particularly controversial is the situation which arises when multiple products' keys are bound together. If products have dependencies on other products (as is the case with expansion packs), it is common for companies to ban all bound products. For example, if a fake key is used with an expansion pack, the server may ban legitimate keys from the original game. Similarly, with Valve's Steam service, all products the user has purchased are bound into the one account. If this account is banned, the user will lose access to every product associated with the same account.
This "multi-ban" is highly controversial, since it bans users from products which they have legitimately purchased and used.
False positives
Bans are enforced by servers immediately upon detection of cracks or cheats, usually without human intervention. Sometimes, legitimate users are wrongly deemed in violation of the license, and banned. Large-scale waves of false positives are sometimes corrected (as happened in World of Warcraft); however, individual cases may not be given any attention.
A common cause of false positives (as with the World of Warcraft case above) is users of unsupported platforms. For example, users of Linux can run Windows applications through compatibility layers such as Wine and Cedega. This software combination sometimes triggers the game server's anti-cheating software, resulting in a ban, because Wine and Cedega are Windows API compatibility layers for Linux and are therefore considered third-party (cheating) software by the game's server.
See also
Biometric passport
Cryptographic hash function
Intel Upgrade Service
Keygen
License manager
Product activation
Serial number
Software license server
Volume license key
References
Copyright infringement |
6212365 | https://en.wikipedia.org/wiki/Data%20steward | Data steward | A data steward is an oversight or data governance role within an organization, and is responsible for ensuring the quality and fitness for purpose of the organization's data assets, including the metadata for those data assets. A data steward may share some responsibilities with a data custodian, such as the awareness, accessibility, release, appropriate use, security and management of data. A data steward would also participate in the development and implementation of data assets. A data steward may seek to improve the quality and fitness for purpose of other data assets their organization depends upon but is not responsible for.
Data stewards have a specialist role that utilizes an organization's data governance processes, policies, guidelines and responsibilities for administering an organization's entire data in compliance with policy and/or regulatory obligations. The overall objective of a data steward is the data quality of the data assets, datasets, data records and data elements. This includes documenting metainformation for the data, such as definitions, related rules/governance, physical manifestation, and related data models (most of these properties being specific to an attribute/concept relationship), identifying the owners'/custodians' various responsibilities, providing insight into attribute quality, aiding with project requirements for data facilitation, and documenting capture rules.
Data stewards begin the stewarding process with the identification of the data assets and elements which they will steward, with the ultimate result being standards, controls and data entry. The steward works closely with business glossary standards analysts (for standards), with data architect/modelers (for standards), with DQ analysts (for controls) and with operations team members (good-quality data going in per business rules) while entering data.
Data stewardship roles are common when organizations attempt to exchange data precisely and consistently between computer systems and to reuse data-related resources. Master data management often makes reference to the need for data stewardship for its implementation to succeed. Data stewardship must have a precise purpose: ensuring that data is fit for purpose (data fitness).
Data steward responsibilities
A data steward ensures that each assigned data element (a small illustrative audit sketch follows this list):
Has clear and unambiguous data element definition
Does not conflict with other data elements in the metadata registry (removing duplicates, overlaps, etc.)
Has clear enumerated value definitions if it is of type Code
Is still being used (unused data elements are removed)
Is being used consistently in various computer systems
Is being used and is fit for purpose (data fitness)
Has adequate documentation on appropriate usage and notes
Has its origin and sources of authority documented
Is protected against unauthorised access or change
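As a small illustration of how some of these checks can be automated, the Python sketch below audits a list of data element records for missing definitions, duplicate names, and code-type elements without enumerated values. The record layout and field names are hypothetical and not tied to any particular metadata registry.

```python
# Minimal data-element audit sketch; the record layout is invented for illustration.
elements = [
    {"name": "customer_id", "definition": "Unique customer identifier", "type": "Integer"},
    {"name": "status_code", "definition": "", "type": "Code", "enumerated_values": []},
    {"name": "customer_id", "definition": "Duplicate entry", "type": "Integer"},
]

def audit(elements):
    issues = []
    seen = set()
    for e in elements:
        name = e["name"]
        if not e.get("definition"):
            issues.append(f"{name}: missing definition")
        if name in seen:
            issues.append(f"{name}: duplicate data element name")
        seen.add(name)
        if e.get("type") == "Code" and not e.get("enumerated_values"):
            issues.append(f"{name}: code element without enumerated value definitions")
    return issues

for issue in audit(elements):
    print(issue)
```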
Responsibilities of data stewards vary between different organisations and institutions. For example, at Delft University of Technology, data stewards are perceived as the first contact point for any questions related to research data. They also have subject-specific background allowing them to easily connect with researchers and to contextualise data management problems to take into account disciplinary practices.
Types of data stewards
Depending on the set of data stewardship responsibilities assigned to an individual, there are 4 types (or dimensions of responsibility) of data stewards typically found within an organization:
Data object data steward - responsible for managing reference data and attributes of one business data entity
Business data steward - responsible for managing critical data, both reference and transactional, created or used by one business function
Process data steward - responsible for managing data across one business process
System data steward - responsible for managing data for at least one IT system
Benefits of data stewardship
Systematic data stewardship can foster:
Faster analysis
Consistent use of data management resources
Easy mapping of data between computer systems and exchange documents
Lower costs associated with migration to (for example) Service Oriented Architecture (SOA)
Better control of dangers associated with privacy, legal, errors, etc.
Assignment of each data element to a person sometimes seems like an unimportant process. But many groups have found that users have greater trust and usage rates in systems where they can contact a person with questions on each data element.
Examples
Delft University of Technology (TU Delft) offers an example of data stewardship implementation at a research institution. In 2017 the Data Stewardship Project was initiated at TU Delft to address research data management needs in a disciplinary manner across the whole campus. Dedicated data stewards with subject-specific background were appointed at every TU Delft faculty to support researchers with data management questions and to act as a linking point with the other institutional support services. The project is coordinated centrally by TU Delft Library, and it has its own website, blog and a YouTube channel.
The EPA metadata registry furnishes an example of data stewardship. Note that each data element therein has a "POC" (point of contact).
Data stewardship applications
A new market for data governance applications is emerging, one in which both technical and business staff — stewards — manage policies. These new applications, like previous generations, deliver a strong business glossary capability, but they do not stop there. Vendors are introducing additional features addressing the roles of business in addition to technical stewards' concerns.
Information stewardship applications are business solutions used by business users acting in the role of information steward (interpreting and enforcing information governance policy, for example). These developing solutions represent, for the most part, an amalgam of a number of disparate, previously IT-centric tools already on the market, but are organized and presented in such a way that information stewards (a business role) can support the work of information policy enforcement as part of their normal, business-centric, day-to-day work in a range of use cases.
The initial push for the formation of this new category of packaged software came from operational use cases — that is, use of business data in and between transactional and operational business applications. This is where most of the master data management efforts are undertaken in organizations. However, there is also now a faster-growing interest in the new data lake arena for more analytical use cases.
Some of the vendors in Metadata Management, like Alation, have started highlighting the importance of Data Stewards to employees interested in using data to make business decisions.
See also
Metadata
Metadata registry
Data curation
Data element
Data element definition
Representation term
ISO/IEC 11179
References
Universal Meta Data Models, by David Marco and Michael Jennings, Wiley, 2004, page 93-94
Metadata Solution by Adrinne Tannenbaum, Addison Wesley, 2002, page 412
Building and Managing the Meta Data Repository, by David Marco, Wiley, 2000, pages 61–62
The Data Warehouse Lifecycle Toolkit, by Ralph Kimball et al., Wiley, 1998, also briefly mentions the role of data steward in the context of data warehouse project management on page 70.
Developing Geospatial Intelligence Stewardship for Multinational Operations, by Jeff Thomas, US Army Command General Staff College, 2010, www.dtic.mil/dtic/tr/fulltext/u2/a524227.pdf.
Notes
Data management
Information technology governance
Knowledge representation
Library occupations
Metadata
Technical communication |
1474380 | https://en.wikipedia.org/wiki/Pixar%20Image%20Computer | Pixar Image Computer | The Pixar Image Computer is a graphics computer originally developed by the Graphics Group, the computer division of Lucasfilm, which was later renamed Pixar. Aimed at commercial and scientific high-end visualization markets, such as medicine, geophysics and meteorology, the original machine was advanced for its time, but sold poorly.
History
Creation
When George Lucas recruited people from NYIT in 1979 to start its Computer Division, the group was set to develop digital optical printing, digital audio, digital non-linear editing and computer graphics. The quality of computer graphics was simply not good enough, due to the technological limitations of the time. The team decided to solve the problem by starting a hardware project and building what they would call the Pixar Image Computer, a machine with more computational power that could produce images at higher resolution.
Availability
About three months after their acquisition by Steve Jobs on February 3, 1986, the computer became commercially available for the first time, and was aimed at commercial and scientific high-end visualization markets, such as medical imaging, geophysics, and meteorology. The machine sold for $135,000, but also required a $35,000 workstation from Sun Microsystems or Silicon Graphics. The original machine was well ahead of its time and generated many single-unit sales for labs and research. However, the system did not sell in quantity.
In 1987, Pixar redesigned the machine to create the P-II, a second-generation machine that sold for $30,000. In an attempt to gain a foothold in the medical market, Pixar donated ten machines to leading hospitals and sent marketing people to doctors' conventions. However, this had little effect on sales, despite the machine's ability to render CAT scan data in 3D to show perfect images of the human body. Pixar did get a contract with a manufacturer of CAT scanners, which accounted for 30 machines. By 1988 Pixar had only sold 120 Pixar Image Computers.
In 1988, Pixar began the development of the PII-9, a nine-slot version of the low-cost P-II. This machine was coupled with a very early RAID model, a high-performance bus, a hardware image decompression card, 4 processors (called Chaps, or channel processors), very large memory cards (VME-sized cards full of memory), high-resolution video cards with 10-bit DACs that were programmable for a variety of frame rates and resolutions, and finally an overlay board which ran NeWS, as well as the 9-slot chassis. A full-up system was quite expensive, as the 3 GiB RAID alone was $300,000. At the time, most file systems could only address 2 GiB of disk. This system was aimed at high-end government imaging applications, which until then were handled by dedicated systems produced by the aerospace industry at a cost of a million dollars a seat. The PII-9 and the associated software became the prototype of the next generation of commercial "low cost" workstations.
Demise and legacy
In 1990, the Pixar Image Computer was defining the state-of-the-art in commercial image processing. Despite this, the government decided that the per-seat cost was still too high for mass deployment and to wait for the next generation systems to achieve cost reductions. This decision was the catalyst for Pixar to lay off its hardware engineers and sell the imaging business. There were no high volume buyers in any industry. Fewer than 300 Pixar Image Computers were ever sold.
The Pixar computer business was sold to Vicom Systems in 1990 for $2,000,000. Vicom Systems filed for Chapter 11 within a year afterwards.
Many of the lessons learned from the Pixar Image Computer made it into the Low Cost Workstation (LCWS) and Commercial Analyst Workstation (CAWS) program guidelines in the early and mid 1990s. The government mass deployment that drove the PII-9 development occurred in the late 1990s, in a program called Integrated Exploitation Capability (IEC).
Design
The P-II could have two Channel Processors, or Chaps. The chassis could hold 4 cards. The PII-9 could hold 9 cards (4 Chaps, 2 video processors, 2 Off Screen Memory (OSM) cards, and an Overlay Board for the NeWS windowing system). NeWS was extended to control the image pipeline for roaming, image comparison, and stereo image viewing.
Each Chap is a 4-way parallel (RGBA) image computer. This was a SIMD architecture, which was well suited to imagery and video applications. It processed four image channels in parallel: one for red, one for green, one for blue, and one for the alpha channel (whose inventors have connections to Pixar). Images were stored with 12 bits per color channel (or 48 bits per pixel). The 12-bit data represented an unusual (for today) fixed-point format that ranged from -1.5 to 2.5, using 2 bits for the integer portion, meaning the range from 0 to 1 had 10-bit accuracy.
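One encoding consistent with that description treats the 12-bit word as an unsigned code with a step of 1/1024 and an offset of -1.5, as in the Python sketch below; the exact hardware mapping is an assumption made for illustration, not documented Pixar behavior.

```python
# Sketch of a 12-bit fixed-point channel covering -1.5 .. ~2.499 in steps of 1/1024.
# The exact mapping is an assumption consistent with the range described above.
STEP = 1.0 / 1024          # 10 fractional bits
OFFSET = -1.5

def decode(raw12):
    """Map a 12-bit code (0..4095) to a channel value."""
    return raw12 * STEP + OFFSET

def encode(value):
    """Map a channel value back to the nearest 12-bit code, clamping to range."""
    raw = round((value - OFFSET) / STEP)
    return max(0, min(4095, raw))

print(decode(0), decode(4095))      # -1.5  2.4990234375
print(encode(0.0), encode(1.0))     # 1536  2560  (0..1 spans 1024 codes: 10-bit accuracy)
```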
A Unix host machine was generally needed to operate it (to provide a keyboard and mouse for user input). The system could communicate image data externally over an 80M per second "Yapbus" or a 2M per second multibus to other hosts, data sources, or disks, and had a performance measured equivalent to 200 VUPS, or 200 times the speed of a VAX-11/780.
Use
Walt Disney Feature Animation, whose parent company later purchased Pixar in 2006, used dozens of the Pixar Image Computers for their Computer Animation Production System (CAPS) and was using them in production up through Pocahontas in 1995.
References
External links
Computer workstations
Pixar
SIMD computing |
7173858 | https://en.wikipedia.org/wiki/Orphaned%20technology | Orphaned technology | Orphaned technology is a descriptive term for computer products, programs, and platforms that have been abandoned by their original developers. Orphaned technology refers to software, such as abandonware and antique software, but also to computer hardware and practices. In computer software standards and documentation, deprecation is the gradual phasing-out of a software or programming language feature, while orphaning usually connotes a sudden discontinuation, usually for business-related reasons, of a product with an active user base.
For users of technologies that have been withdrawn from the market, there is a choice between maintaining their software support environments in some form of emulation, or switching to other supported products, possibly losing capabilities unique to their original solution.
Abandoning a technology is not always due to a bad or outmoded idea. There are instances, such as the case of some medical technologies, where products are phased out of the market because they are no longer viable as business ventures. Some orphaned technologies do not suffer complete abandonment or obsolescence. For instance, there is the case of IBM's Silicon Germanium (SiGe) technology, a program that produced an in situ doped alloy as a replacement for the conventional implantation step in a silicon semiconductor bipolar process. The technology was previously orphaned but was continued by a small team at IBM, and it emerged as a leading product in the high-volume communications marketplace. Technologies orphaned due to failure on the part of their startup developers can be picked up by another investor. This is demonstrated by Wink, an IoT technology orphaned when its parent company Quirky filed for bankruptcy. The platform, however, continued after it was purchased by another company called Flex.
Some well-known examples of orphaned technology include:
Coleco ADAM - 8-bit home computer
TI 99/4A - 16-bit home computer
Mattel Aquarius
Apple Lisa - 16/32-bit graphical computer
Newton PDA (Apple Newton) - tablet computer
DEC Alpha - 64-bit microprocessor
HyperCard - hypermedia
ICAD (KBE) - knowledge-based engineering
Javelin Software - modeling and data analysis
LISP machines - LISP oriented computers
Classic Mac OS - m68k and PowerPC operating system
Microsoft Bob - graphical helper
Windows 9x - x86 operating system
OpenDoc - compound documents (Mac OS, OS/2)
Prograph - visual programming system
Poly-1 - parallel networked computer designed in New Zealand for use in education and training
Mosaic notation program - music notation application by Mark of the Unicorn
Open Music System - Gibson
Symbolics Inc's operating systems, Genera and OpenGenera, were twice orphaned, as they were ported from LISP machines to computers using the Alpha 64-bit CPU.
User groups often exist for specific orphaned technologies, such as The Hong Kong Newton User Group, Symbolics Lisp [Machines] Users' Group (now known as the Association of Lisp Users), and Newton Reference. The Save Sibelius group sprang into existence because Sibelius (scorewriter) users feared the application would be orphaned after its owners Avid Tech fired most of the development team, who were thereafter hired by Steinberg to develop the competing product, Dorico.
See also
Orphan works
Abandonware
Planned obsolescence
References
Orphan works
Technological change
Information technology |
10439244 | https://en.wikipedia.org/wiki/Linux%20startup%20process | Linux startup process | Linux startup process is the multi-stage initialization process performed during booting a Linux installation. It is in many ways similar to the BSD and other Unix-style boot processes, from which it derives.
Booting a Linux installation involves multiple stages and software components, including firmware initialization, execution of a boot loader, loading and startup of a Linux kernel image, and execution of various startup scripts and daemons. For each of these stages and components there are different variations and approaches; for example, GRUB, coreboot or Das U-Boot can be used as boot loaders (historical examples are LILO, SYSLINUX or Loadlin), while the startup scripts can be either traditional init-style, or the system configuration can be performed through modern alternatives such as systemd or Upstart.
Overview
Early stages of the Linux startup process depend very much on the computer architecture. IBM PC compatible hardware is one architecture Linux is commonly used on; on these systems, the BIOS plays an important role, which might not have exact analogs on other systems. In the following example, IBM PC compatible hardware is assumed:
The BIOS performs startup tasks like the Power-on self-test specific to the actual hardware platform. Once the hardware is enumerated and the hardware which is necessary for boot is initialized correctly, the BIOS loads and executes the boot code from the configured boot device.
The boot loader often presents the user with a menu of possible boot options and has a default option, which is selected after some time passes. Once the selection is made, the boot loader loads the kernel into memory, supplies it with some parameters and gives it control.
The kernel, if compressed, will decompress itself. It then sets up system functions such as essential hardware and memory paging, and calls start_kernel() which performs the majority of system setup (interrupts, the rest of memory management, device and driver initialization, etc.). It then starts up, separately, the idle process, scheduler, and the init process, which is executed in user space.
The init either consists of scripts that are executed by the shell (sysv, bsd, runit) or configuration files that are executed by the binary components (systemd, upstart). Init has specific levels (sysv, bsd) or targets (systemd), each of which consists of specific set of services (daemons). These provide various non-operating system services and structures and form the user environment. A typical server environment starts a web server, database services, and networking.
The typical desktop environment begins with a daemon, called the display manager, that starts a graphic environment which consists of a graphical server that provides a basic underlying graphical stack and a login manager that provides the ability to enter credentials and select a session. After the user has entered the correct credentials, the session manager starts a session. A session is a set of programs such as UI elements (panels, desktops, applets, etc.) which, together, can form a complete desktop environment.
On shutdown, init is called to close down all user space functionality in a controlled manner. Once all the other processes have terminated, init makes a system call to the kernel instructing it to shut the system down.
Boot loader phase
The boot loader phase varies by computer architecture. Since the earlier phases are not specific to the operating system, the BIOS-based boot process for x86 and x86-64 architectures is considered to start when the master boot record (MBR) code is executed in real mode and the first-stage boot loader is loaded. In UEFI systems, the Linux kernel can be executed directly by UEFI firmware via EFISTUB, but usually uses GRUB 2 or systemd-boot as a boot loader. Below is a summary of some popular boot loaders:
GRUB 2 differs from GRUB 1 by being capable of automatically detecting various operating systems and configuring itself. The stage1 is loaded and executed by the BIOS from the Master Boot Record (MBR). The intermediate stage loader (stage1.5, usually core.img) is loaded and executed by the stage1 loader. The second-stage loader (stage2, the /boot/grub/ files) is loaded by the stage1.5 and displays the GRUB startup menu, which allows the user to choose an operating system or examine and edit startup parameters. After a menu entry is chosen and optional parameters are given, GRUB loads the Linux kernel into memory and passes control to it. GRUB 2 is also capable of chain-loading another boot loader. In UEFI systems, stage1 and stage1.5 are usually the same UEFI application file (such as grubx64.efi for x64 UEFI systems).
systemd-boot (formerly Gummiboot), a bootloader included with systemd that requires minimal configuration (for UEFI systems only).
SYSLINUX/ISOLINUX is a boot loader that specializes in booting full Linux installations from FAT filesystems. It is often used for boot or rescue floppy discs, live USBs, and other lightweight boot systems. ISOLINUX is generally used by Linux live CDs and bootable install CDs.
rEFInd, a boot manager for UEFI systems.
coreboot is a free implementation of UEFI or BIOS firmware. It is usually deployed with the system board, with field upgrades provided by the vendor if need be. Parts of coreboot become the system's BIOS and stay resident in memory after boot.
Das U-Boot is a boot loader for embedded systems. It is used on systems that do not have a BIOS/UEFI but instead employ custom methods to read the boot loader into memory and execute it.
Historical boot loaders, no longer in common use, include:
LILO does not understand or parse filesystem layout. Instead, a configuration file (/etc/lilo.conf) is created on a running system; it maps raw offset information (via a mapper tool) about the location of the kernel and RAM disks (initrd or initramfs). The configuration file, which includes data such as the boot partition and kernel pathname for each entry, as well as customized options if needed, is then written together with the bootloader code into the MBR boot sector. When this boot sector is read and given control by the BIOS, LILO loads the menu code, displays it, and then uses the stored values together with user input to locate and load the Linux kernel or chain-load another boot loader.
GRUB 1 includes logic to read common file systems at run-time in order to access its configuration file. This gives GRUB 1 the ability to read its configuration file from the filesystem rather than have it embedded into the MBR, which allows it to change the configuration at run-time and specify disks and partitions in a human-readable format rather than relying on offsets. It also contains a command-line interface, which makes it easier to fix or modify GRUB if it is misconfigured or corrupt.
Loadlin is a boot loader that can replace a running DOS or Windows 9x kernel with the Linux kernel at run time. This can be useful in the case of hardware that needs to be switched on via software and for which such configuration programs are proprietary and only available for DOS. This booting method is less necessary nowadays, as Linux has drivers for a multitude of hardware devices, but it has seen some use in mobile devices. Another use case is when Linux is located on a storage device that is not available to the BIOS for booting: DOS or Windows can load the appropriate drivers to make up for the BIOS limitation and boot Linux from there.
Kernel phase
The Linux kernel handles all operating system processes, such as memory management, task scheduling, I/O, interprocess communication, and overall system control. This is loaded in two stages – in the first stage, the kernel (as a compressed image file) is loaded into memory and decompressed, and a few fundamental functions such as basic memory management are set up. Control is then switched one final time to the main kernel start process. Once the kernel is fully operational – and as part of its startup, upon being loaded and executing – the kernel looks for an init process to run, which (separately) sets up a user space and the processes needed for a user environment and ultimate login. The kernel itself is then allowed to go idle, subject to calls from other processes.
For some platforms (like ARM 64-bit), kernel decompression has to be performed by the boot loader instead.
The kernel is typically loaded as an image file, compressed into either the zImage or bzImage format with zlib. A routine at the head of it does a minimal amount of hardware setup, decompresses the image fully into high memory, and takes note of any RAM disk if configured. It then executes kernel startup via ./arch/i386/boot/head and the startup_32() process (for x86-based processors).
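As a rough illustration of the compressed-image idea, the Python sketch below searches a kernel image file for an embedded gzip stream and decompresses it, similar in spirit to the kernel's own extract-vmlinux helper script. It assumes a gzip/zlib-compressed image; modern kernels may instead use xz, zstd or lz4, and the input path is a placeholder supplied by the user.

```python
import sys
import zlib

# Locate and decompress an embedded gzip stream inside a kernel image file.
# Usage: python extract.py <path-to-kernel-image>
data = open(sys.argv[1], "rb").read()
offset = data.find(b"\x1f\x8b\x08")               # gzip magic bytes + deflate method
if offset < 0:
    sys.exit("no gzip stream found; the kernel may use another compressor")
decomp = zlib.decompressobj(16 + zlib.MAX_WBITS)  # 16+ means: expect a gzip header
vmlinux = decomp.decompress(data[offset:])
with open("vmlinux.extracted", "wb") as out:
    out.write(vmlinux)
print(f"found gzip stream at offset {offset}; wrote {len(vmlinux)} bytes")
```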
The startup function for the kernel (also called the swapper or process 0) establishes memory management (paging tables and memory paging), detects the type of CPU and any additional functionality such as floating point capabilities, and then switches to non-architecture specific Linux kernel functionality via a call to start_kernel().
start_kernel executes a wide range of initialization functions. It sets up interrupt handling (IRQs), further configures memory, starts the Init process (the first user-space process), and then starts the idle task via cpu_idle(). Notably, the kernel startup process also mounts the initial RAM disk ("initrd") that was loaded previously as the temporary root file system during the boot phase. The initrd allows driver modules to be loaded directly from memory, without reliance upon other devices (e.g. a hard disk) and the drivers that are needed to access them (e.g. a SATA driver). This split of some drivers statically compiled into the kernel and other drivers loaded from initrd allows for a smaller kernel. The root file system is later switched via a call to pivot_root() which unmounts the temporary root file system and replaces it with the use of the real one, once the latter is accessible. The memory used by the temporary root file system is then reclaimed.
Thus, the kernel initializes devices, mounts the root filesystem specified by the boot loader as read only, and runs Init (/sbin/init) which is designated as the first process run by the system (PID = 1). A message is printed by the kernel upon mounting the file system, and by Init upon starting the Init process. It may also optionally run Initrd to allow setup and device related matters (RAM disk or similar) to be handled before the root file system is mounted.
According to Red Hat, the detailed kernel process at this stage is therefore summarized as follows:
"When the kernel is loaded, it immediately initializes and configures the computer's memory and configures the various hardware attached to the system, including all processors, I/O subsystems, and storage devices. It then looks for the compressed initrd image in a predetermined location in memory, decompresses it, mounts it, and loads all necessary drivers. Next, it initializes virtual devices related to the file system, such as LVM or software RAID before unmounting the initrd disk image and freeing up all the memory the disk image once occupied. The kernel then creates a root device, mounts the root partition read-only, and frees any unused memory. At this point, the kernel is loaded into memory and operational. However, since there are no user applications that allow meaningful input to the system, not much can be done with it." An initramfs-style boot is similar, but not identical to the described initrd boot.
At this point, with interrupts enabled, the scheduler can take control of the overall management of the system, to provide pre-emptive multi-tasking, and the init process is left to continue booting the user environment in user space.
Early user space
initramfs, also known as early user space, has been available since version 2.5.46 of the Linux kernel, with the intent to replace as many functions as possible that previously the kernel would have performed during the start-up process. Typical uses of early user space are to detect what device drivers are needed to load the main user space file system and load them from a temporary filesystem. Many distributions use dracut to generate and maintain the initramfs image.
Init process
Once the kernel has started, it starts the init process. Historically this was the "SysV init", which was just called "init". More recent Linux distributions are likely to use one of the more modern alternatives such as systemd.
Broadly, these init systems are grouped under operating-system service management.
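A quick way to see which init system a given machine booted with is to inspect PID 1 via procfs, as in the following Python sketch (Linux-specific; it assumes /proc is mounted and readable).

```python
# Identify the running init system and show the kernel command line (Linux only).
def read(path):
    with open(path) as f:
        return f.read().strip()

init_name = read("/proc/1/comm")     # e.g. "systemd", "init", or "upstart"
cmdline = read("/proc/cmdline")      # parameters the boot loader passed to the kernel

print("PID 1 is running:", init_name)
print("Kernel command line:", cmdline)
```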
See also
SYSLINUX
Windows startup process
References
External links
Greg O'Keefe - From Power Up To Bash Prompt
a developerWorks article by M. Tim Jones
Bootchart: Boot Process Performance Visualization
The bootstrap process on EFI systems, LWN.net, February 11, 2015, by Matt Fleming
Booting
Linux
Linux kernel |
3576372 | https://en.wikipedia.org/wiki/SevenDust%20%28computer%20virus%29 | SevenDust (computer virus) | SevenDust is a computer virus that infects computers running certain versions of the classic Mac OS. It was first discovered in 1998, and originally referred to as 666 by Apple.
SevenDust is a polymorphic virus, with some variants also being encrypted. It spreads when users run an infected executable. Some variants of SevenDust also delete all non-application files accessed during certain times.
See also
Computer virus
Comparison of computer viruses
References
External links
666, by McAfee
Classic Mac OS viruses
1998 in technology |
4986342 | https://en.wikipedia.org/wiki/APEX%20system | APEX system | APEX stands for Additive System of Photographic Exposure, which
was proposed in the 1960 ASA standard
for monochrome film speed, ASA PH2.5-1960,
as a means of simplifying exposure computation.
Exposure equation
Until the late 1960s, cameras did not have built-in exposure meters, and many photographers did not have external exposure meters. Consequently, it often was necessary to calculate exposure from lighting conditions. The relationship of recommended photographic exposure to a scene's average luminance is given by the camera exposure equation
A^2 / T = B S / K
where
A is the f-number (reciprocal of the relative aperture)
T is the exposure time ("shutter speed") in seconds
B is the average scene luminance ("brightness")
S is the ASA arithmetic film speed
K is the reflected-light meter calibration constant
Use of the symbol B for luminance reflects photographic industry practice at the time of ASA PH2.5-1960; current SI practice prefers the symbol L. German sources typically used a different symbol for the relative aperture. Many authors now use N and t for relative aperture and exposure time.
Recommendations for the value of the calibration constant K in applicable ANSI and ISO standards have varied slightly over the years; this topic is discussed in greater detail under Exposure meter calibration in the Light meter article.
Exposure value
In an attempt to simplify choosing among combinations of equivalent camera settings, the concept of exposure values (German: Lichtwert) was originally developed and proposed to other manufacturers by a German shutter manufacturer in the early 1950s. Combinations of shutter speed and relative aperture that resulted in the same exposure were said to have the same exposure value Ev, a base-2 logarithmic scale defined by
Ev = log2 (A^2 / T)
When applied to the left-hand side of the exposure equation, Ev denoted combinations of camera settings; when applied to the right-hand side, it denoted combinations of luminance and film speed. For a given film speed, the recommended exposure value was determined solely by the luminance. Once the exposure value was determined, it could be directly set on cameras with an Ev scale. Adjustment of exposure was simple, because a change of 1 Ev corresponded to a change of 1 exposure step, i.e., either a halving or doubling of exposure.
Starting in 1954, the so-called Exposure Value Scale (EVS), originally known as the Light Value Scale (LVS), was adopted by Rollei, Hasselblad, Voigtländer, Braun, Kodak, Seikosha, Aires, Konica, Olympus, Ricoh and others, who introduced lenses with coupled shutters and apertures, such that, after setting the exposure value, adjusting either the shutter speed or the aperture made a corresponding adjustment in the other to maintain a constant exposure. On some models, the coupling of shutter speed and aperture setting was optional, so that photographers could choose their preferred method of working depending on the situation. Use of the EV scale on such cameras is discussed briefly by Adams (1981, 39).
Modern cameras no longer display exposure values as such, but continue to offer exposure modes which support users in employing the concept of counter-adjusting shutter speed and aperture at a fixed point of exposure. This can be found in features such as Manual Shift on some Minolta, Konica Minolta and Sony Alpha cameras, or Hyper Manual on some Pentax (D)SLRs since 1991, where the photographer can change one of the parameters and the camera will adjust the other accordingly for as long as the Auto-Exposure Lock (AEL) function is activated. In a wider sense, functions like Pa/Ps Creative Program Control (by Minolta, Konica Minolta and Sony) or Hyper Program (by Pentax) belong to this group of features as well.
The additive (logarithmic) system
Although some photographers (Adams 1981, 66) routinely determined camera settings using the exposure equation, it generally was assumed that doing so would prove too daunting for the casual photographer. The 1942 ASA exposure guide, ASA Z38.2.2-1942, featured a dial calculator, and revisions in 1949 and 1955 used a similar approach.
An alternative simplification also was possible: ASA PH2.5-1960 proposed extending the concept of exposure value to all exposure parameters. Taking base-2 logarithms of both sides of the exposure equation and separating numerators and denominators reduces exposure calculation to a matter of addition:
Ev = Av + Tv = Bv + Sv
where
Av is the aperture value: Av = log2 A^2
Tv is the time value: Tv = log2 (1/T)
Ev is the exposure value: Ev = Av + Tv = Bv + Sv
Sv is the speed value (aka sensitivity value): Sv = log2 (N S)
Bv is the luminance value (aka brightness value): Bv = log2 (B / (N K))
N is a constant that establishes the relationship between the ASA arithmetic film speed S and the ASA speed value Sv. The value of N is approximately 0.30 (precisely, 2^(-7/4), or about 0.2973).
K is the reflected-light meter calibration constant
ASA standards covered incident-light meters as well as reflected-light meters; the incident-light exposure equation is
A^2 / T = I S / C
where
I is the scene illuminance
C is the incident-light meter calibration constant
The use of the symbol I for illuminance reflects photographic industry practice at the time of the 1961 ASA standard for exposure meters, ASA PH2.12-1961; current SI practice prefers the symbol E.
ASA PH2.12-1961 included incident-light metering in the APEX concept:
Ev = Iv + Sv
where
Iv is the incident-light value: Iv = log2 (I / (N C))
(German sources typically use their own symbol (for Lichtwert or Belichtungswert — but not to be confused with the English term light value) instead of the exposure value's symbol Ev. Consequently, the aperture value is referred to as Blendenleitwert, and the time value as Zeitleitwert. The film speed value is named Empfindlichkeitsleitwert, and the brightness value is known as Objekthelligkeit.)
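To make the additive bookkeeping concrete, the short Python sketch below computes APEX values for one example exposure and verifies that Av + Tv equals Bv + Sv. The f-number, shutter time, film speed and the calibration constant K = 12.5 cd/m2 are illustrative choices, not prescribed values.

```python
from math import log2

N = 2 ** -1.75        # APEX speed scaling constant, about 0.2973
K = 12.5              # a common reflected-light meter calibration constant (cd/m^2)

def Av(f_number):          return log2(f_number ** 2)
def Tv(seconds):           return log2(1 / seconds)
def Sv(asa_speed):         return log2(N * asa_speed)
def Bv(luminance_cd_m2):   return log2(luminance_cd_m2 / (N * K))

# Example: f/16 at 1/125 s with ISO 100 film, and the luminance that balances them.
a, t, s = 16, 1 / 125, 100
ev_settings = Av(a) + Tv(t)              # 8 + ~6.97, roughly 14.97
luminance = (a ** 2 / t) * K / s         # solve the exposure equation A^2/T = B S/K for B
ev_scene = Bv(luminance) + Sv(s)
print(round(ev_settings, 2), round(ev_scene, 2))   # the two sides agree
```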
APEX in practice
APEX made exposure computation a relatively simple matter; the foreword of ASA PH2.5-1960 recommended that exposure meters, exposure calculators, and exposure tables be modified to incorporate the logarithmic values that APEX required. In many instances, this was done: the 1973 and 1986 ANSI exposure guides, ANSI PH2.7-1973 and ANSI PH2.7-1986, eliminated exposure calculator dials in favor of tabulated APEX values. However, the logarithmic markings for aperture and shutter speed required to set the computed exposure were never incorporated in consumer cameras. Accordingly, no reference to APEX was made in ANSI PH3.49-1971 (though it was included in the Appendix). The incorporation of exposure meters in many cameras in the late 1960s eliminated the need to compute exposure, so APEX saw little actual use.
With the passage of time, formatting of APEX quantities has varied considerably; although the 'v' originally was a subscript, it sometimes was given simply as lowercase, and sometimes as uppercase. Treating these quantities as acronyms rather than quantity symbols probably is reasonable, because several of the quantity symbols (E, B, and I for exposure, luminance, and illuminance) used at the time APEX was proposed are in conflict with current preferred SI practice.
A few artifacts of APEX remain. Canon, Pentax and Leica cameras use 'Av' and 'Tv' to indicate relative aperture and shutter speed as well as to symbolize aperture priority and shutter priority modes. Some Pentax DSLRs even provide a 'TAv' exposure mode to automatically set the ISO speed depending on the desired aperture and shutter settings, and 'Sv' (for sensitivity priority) to pre-set the ISO speed and let the camera choose the other parameters. Some meters, such as Pentax spot meters, directly indicate the exposure value for ISO 100 film speed. For a given film speed, exposure value is directly related to luminance, although the relationship depends on the reflected-light meter calibration constant K. Most photographic equipment manufacturers specify metering sensitivities in EV at ISO 100 speed (the uppercase 'V' is almost universal).
It is common to express exposure increments in EV, as when adjusting exposure relative to what a light meter indicates (Ray 2000, 316). For example, an exposure compensation of +1 EV (or +1 step) means to increase exposure, by using either a longer exposure time or a smaller f-number.
The sense of exposure compensation is opposite that of the EV scale itself. An increase in exposure corresponds to a decrease in EV, so an exposure compensation of +1 EV results in a smaller EV; conversely, an exposure compensation of −1 EV results in a greater EV.
Use of APEX values in Exif
APEX has seen a partial resurrection in the Exif standard, which calls for storing exposure data using APEX values. There are some minor differences from the original APEX in both terminology and values. The implied value (1/3.125) for the speed scaling constant N given in the Exif 2.2 specification ("Exif 2.2"; JEITA 2002) differs slightly from the APEX value of 2^(-7/4) (0.2973); with the Exif value, an ISO arithmetic film speed of 100 corresponds exactly to a speed value Sv of 5.
The relationship between Bv and luminance depends on both the speed scaling constant N and the reflected-light meter calibration constant K:
B = N K 2^Bv
Because Exif 2.2 records ISO arithmetic speed rather than film sensitivity, the value of N affects the recorded value of Bv but not the recorded film speed.
Exif 2.2 does not recommend a range of values for K, presumably leaving the choice to the equipment manufacturer. The example data in Annex C of Exif 2.2 give 1 footlambert for Bv = 0. This is in agreement with APEX, but would imply K = 1/N, or 3.125 with B in footlamberts. With B in cd/m2, this becomes 10.7, which is slightly less than the value of 12.5 recommended by ISO 2720:1974 and currently used by many manufacturers. The difference possibly arises from rounding in the example table; it also is possible that the example data simply were copied from an old ASA or ANSI standard.
Notes
References
Adams, Ansel. 1981. The Negative. Boston: New York Graphic Society.
ANSI PH2.7-1973. American National Standard Photographic Exposure Guide. New York: American National Standards Institute. Superseded by ANSI PH2.7-1986.
ANSI PH2.7-1986. American National Standard for Photography — Photographic Exposure Guide. New York: American National Standards Institute.
ANSI PH3.49-1971. American National Standard for general-purpose photographic exposure meters (photoelectric type). New York: American National Standards Institute. After several revisions, this standard was withdrawn in favor of ISO 2720:1974.
ASA PH2.5-1960. American Standard Method for Determining Speed of photographic Negative Materials (Monochrome, Continuous Tone). New York: United States of America Standards Institute.
ASA PH2.12-1961. American Standard, General-Purpose Photographic Exposure Meters (photoelectric type). New York: American Standards Association. Superseded by ANSI PH3.49-1971.
ASA Z38.2.2-1942. American Emergency Standard Photographic Exposure Computer. New York: American Standards Association.
ASA Z38.2.6-1948. American Standard for General-Purpose Photographic Exposure Meters (Photoelectric Type). New York: American Standards Association. Superseded by ASA PH2.12-1957.
ISO 2720:1974. General Purpose Photographic Exposure Meters (Photoelectric Type)—Guide to Product Specification. International Organization for Standardization.
Japan Electronics and Information Technology Industries Association. 2002. JEITA CP-3451, Exchangeable image file format for digital still cameras: Exif Version 2.2 (PDF). Japan Electronics and Information Technology Industries Association.
JEITA. See Japan Electronics and Information Technology Industries Association.
Ray, Sidney F. 2000. Camera Exposure Determination. In The Manual of Photography: Photographic and Digital Imaging, 9th ed. Ed. Ralph E. Jacobson, Sidney F. Ray, Geoffrey G. Atteridge, and Norman R. Axford. Oxford: Focal Press.
External links
Doug Kerr's in-depth description of APEX
Photographic techniques |
95928 | https://en.wikipedia.org/wiki/Z%20shell | Z shell | The Z shell (Zsh) is a Unix shell that can be used as an interactive login shell and as a command interpreter for shell scripting. Zsh is an extended Bourne shell with many improvements, including some features of Bash, ksh, and tcsh.
History
Paul Falstad wrote the first version of Zsh in 1990 while a student at Princeton University. The name zsh derives from the name of Yale professor Zhong Shao (then a teaching assistant at Princeton University) — Paul Falstad regarded Shao's login-id, "zsh", as a good name for a shell.
Zsh was at first intended to be a subset of csh for the Commodore Amiga, but expanded far beyond that. By the time of the release of version 1.0 in 1990 the aim was to be a cross between ksh and tcsh – a powerful "command and programming language" that is well-designed and logical (like ksh), but also built for humans (like tcsh), with all the neat features like spell checking, login/logout watching and termcap support that were "probably too weird to make it into an AT&T product".
Zsh is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.
In 2019, macOS Catalina adopted Zsh as the default login shell, replacing the GPLv2 licensed version of Bash, and when Bash is run interactively on Catalina, a warning is shown by default.
In 2020, Kali Linux adopted Zsh as the default shell since its 2020.4 release.
Features
Features include:
Programmable command-line completion that can help the user type both options and arguments for most used commands, with out-of-the-box support for several hundred commands
Sharing of command history among all running shells
Extended file globbing allows file specification without needing to run an external program such as find (a short scripted example appears after this list)
Improved variable/array handling
Editing of multi-line commands in a single buffer
Spelling correction and autofill of command names (and optionally arguments, assumed to be file names)
Various compatibility modes, e.g. Zsh can pretend to be a Bourne shell when run as /bin/sh
Themeable prompts, including the ability to put prompt information on the right side of the screen and have it auto-hide when typing a long command
Loadable modules, providing among other things: full TCP and Unix domain socket controls, an FTP client, and extended math functions.
The built-in where command. Works like the which command but shows all locations of the target command in the directories specified in $PATH rather than only the one that will be used.
Named directories. This allows the user to set up shortcuts such as ~mydir, which then behave the way ~ and ~user do.
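For instance, the recursive globbing mentioned above can be exercised from a small script by invoking zsh directly, as in the Python sketch below; it assumes zsh is installed and on the PATH, and the pattern and glob qualifier are arbitrary examples.

```python
import subprocess

# Ask zsh to expand an extended glob: '**/*.md' matches recursively without
# invoking find, and the '(.)' glob qualifier keeps only plain files.
result = subprocess.run(
    ["zsh", "-c", "print -rl -- **/*.md(.)"],
    capture_output=True, text=True, check=False,   # zsh exits non-zero if nothing matches
)
print(result.stdout if result.returncode == 0 else "no matches found")
```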
Community
A user community website known as "Oh My Zsh" collects third-party plug-ins and themes for the Z shell. As of 2021, their GitHub repository has over 1900 contributors, over 300 plug-ins, and over 140 themes. It also comes with an auto-update tool that makes it easier to keep installed plug-ins and themes updated.
See also
Comparison of command shells
References
External links
ZSH Wiki
Scripting languages
Unix shells |
17079365 | https://en.wikipedia.org/wiki/Mark%20Gasson | Mark Gasson | Mark N. Gasson is a British scientist and visiting research fellow at the Cybernetics Research Group, University of Reading, UK. He pioneered developments in direct neural interfaces between computer systems and the human nervous system, has developed brain–computer interfaces and is active in the research fields of human microchip implants, medical devices and digital identity. He is known for his experiments transmitting a computer virus into a human implant, and is credited with being the first human infected with a computer virus.
Gasson has featured on television documentaries including Through the wormhole with Morgan Freeman, international television and radio news programs, and has delivered public lectures discussing his work including at TEDx. In 2010 Gasson was the General chair for the IEEE International Symposium on Technology and Society 2010 (ISTAS'10) and in 2014 he was entered into the Guinness Book of Records for his experimental work on implantable microchips.
He is currently based in Los Angeles, California.
Early life and education
Gasson obtained his first degree in Cybernetics and Control Engineering in 1998 from the Department of Cybernetics at Reading. In 2005 he was awarded a Ph.D. for his 2002 work on interfacing the nervous system of a human to a computer system.
Career
From 2000 until 2005 Gasson headed research to invasively interface the nervous system of a human to a computer. In 2002 a microelectrode array was implanted in the median nerve of a healthy human and connected percutaneously to a bespoke processing unit. This allowed stimulation of nerve fibers to artificially generate sensations perceivable by the subject, and recording of local nerve activity to form control commands for wirelessly connected devices.
During clinical evaluation of the implant, the nervous system of the human subject, Kevin Warwick, was connected to the internet at Columbia University, New York, enabling a robot arm developed by Peter Kyberd at the University of Reading, UK, to use the subject's neural signals to mimic the subject's hand movements, while allowing the subject to perceive what the robot touched through sensors in the robot's fingertips. Further studies also demonstrated a form of extra-sensory input and showed that it was possible to communicate directly between the nervous systems of two individuals, the first direct and purely electronic communication between the nervous systems of two humans, with a view to ultimately creating a form of telepathy or empathy using the Internet to communicate 'brain-to-brain'. Because of the potentially wide-reaching implications for human enhancement of the research discussed by Gasson and his group, the work was dubbed 'Project Cyborg' by the media.
As of 2005, this was the first study in which this type of implant had been used with a human subject and Gasson was subsequently awarded a PhD for this work.
Invasive brain interfaces (2005)
Gasson and his colleagues, together with neurosurgeon Tipu Aziz and his team at John Radcliffe Hospital, Oxford, and physiologist John Stein of the University of Oxford, have been working on Deep brain stimulation for movement disorders such as Parkinson's disease.
In order to improve control of abnormal spontaneous electrical activity in the brains of patients with movement disorders, as of 2010 they were developing a combined deep brain recording and stimulation device that records deep brain signals, predicts from them the onset of symptoms such as tremor and dystonic bursts, and delivers a short pulse of high-frequency stimulation to stop the symptoms before they have even started.
The Future of Identity (2004–2009)
From 2004 to 2009 Gasson headed a group of academics and industry professionals drawn from 24 institutions across Europe as part of the European Commission funded FIDIS project targeting various aspects of digital identity and privacy, in particular emerging technologies used for identification and profiling. As well as authoring reports on profiling, ambient intelligence and ICT implants, Gasson also went public over privacy concerns related to misuse of location information from GPS devices in smartphones, and was a contributor to FIDIS's controversial Budapest Declaration on Machine Readable Travel Documents which criticized European governments for forcing their citizens to adopt electronic passports which "decrease security and privacy and increase the risk of identity theft".
First human infected with computer virus (2009)
In March 2009 Gasson had a glass capsule RFID device surgically implanted into his left hand. The implant was used as an identification device for the University of Reading’s intelligent building infrastructure to gain building access. Gasson’s smartphone was also augmented with a reader so that the phone would only function when he was holding it.
In April 2010 following experiments showing the potential vulnerabilities of implantable technology, Gasson's team demonstrated how a computer virus could wirelessly infect his implant and then be transmitted on to other systems. Gasson drew parallels with other implantable devices, such as cardiac pacemakers, which he asserted were vulnerable because of a tendency of manufacturers to adopt a "security through obscurity" methodology rather than robust security methods.
He also argued that as functions of the body are restored or enhanced by implanted devices, the boundaries of the body (i.e., the human experience of the body’s delimitation) become increasingly unclear. As a result, the separation between man and machine becomes merely theoretical, meaning that the technology comes to be perceived by the human as a part of their body and so should be considered as such. He argues that this development in our traditional notion of what constitutes our body and its boundaries has two notable repercussions. Firstly, it becomes possible to talk in terms of a human, albeit a technologically enhanced human, becoming for instance infected by a computer virus or hacked by a third party; this forms the basis of his claim to be the first human infected by a computer virus. Secondly, this development of the concept of the body has implications for the right to bodily integrity, which is considered a fundamental right.
In 2010 Gasson was the General chair for the IEEE International Symposium on Technology and Society 2010 (ISTAS'10).
Research
Gasson is a proponent of human enhancement using technology implanted into the body, and argues that advanced medical device technology will inevitably drift to non-medical augmentation applications in humans. He also strongly argues that with technology implanted in humans, the separation between man and machine can become theoretical because the technology can be perceived by the human as being a part of their body. Because of this he reasons that, as the boundaries of the human body (the human experience of the body’s delimitation) become increasingly unclear, it should be accepted that the technology augmentation is a part of the body.
Gasson is an advocate of interdisciplinary collaboration and co-authors with social scientists, philosophers, legal researchers and ethicists to consider the wider implications of his field.
Controversy
The research attracted criticism from computer security blogger Graham Cluley, who stated "Predictions of pacemakers and cochlear implants being hit by virus infections is the very worst kind of scaremongering". In 2012 academic Professor Kevin Fu of the University of Massachusetts Amherst disclosed an attack which "would have switched off a heart defibrillator", adding "there are vulnerabilities [in medical devices] but there is a perceived lack of threats".
Similarly, Barnaby Jack, a researcher at security firm McAfee, demonstrated an attack on an implantable insulin pump.
Some critics have questioned the need to implant the technology to investigate the issues "...it makes no difference if an RFID chip is injected under your skin or stitched into the lining of your jacket...". Gasson argued that many people with implants, including medical devices, consider them to be a part of their body and so it is evident that you cannot simply separate the human and the technology that easily - "actually having something implanted is extremely different to bench testing a piece of hardware because it adds the person and their experiences into the mix. It is seemingly difficult to get across the psychological impact involved in this type of deployment, and this is why I was so keen to test this on myself ... feeling technology to be a part of you is something you probably need to experience to understand".
References
External links
Academics of the University of Reading
Living people
Year of birth missing (living people) |
3993559 | https://en.wikipedia.org/wiki/List%20of%20University%20of%20California%2C%20Irvine%20people | List of University of California, Irvine people | This article lists noted individuals associated with the University of California, Irvine.
Students and alumni
The following are noted alumni and students of the University of California, Irvine, listed by the field(s) they have been noted for. When the information is known and available, degree and year are listed in parentheses.
Art
Michael Asher (B.F.A. 1966) – conceptual artist
Dan Bayles (M.F.A. 2007) – abstract artist
Chris Burden (M.F.A. 1971) – performance artist
Erica Cho (M.F.A) – artist
Garnet Hertz (M.F.A, PhD.) – artist, designer, academic
Tom Jancar (B.A. 1974, M.F.A. 1976) – contemporary art dealer, Jancar Kuhlenschmidt Gallery and Jancar Gallery
Barbara T. Smith (M.F.A. 1971) – performance artist
James Turrell (attended) – minimalist artist primarily known for his use of light and space installations
Film, television, and entertainment
Colet Abedi (B.A. English literature) – writer and television producer
Aras Baskauskas (B.A. 2002, philosophy, MBA 2004) – winner on TV reality show Survivor; CEO of TundraWear.com
Jim Berney (B.S. 1989, computer science) – Academy award-nominated technical director, The Lord of the Rings, The Matrix
Nazanin Boniadi (B.S. 2003, biological sciences) – actress, official spokeswoman for Amnesty International
Crista Flanagan (M.F.A. 2001) – actress, Mad Men
Steve Franks (B.A. 1991) – writer and television producer
Leslie Fu (B.S. 2014, biological sciences) - Twitch streamer
Bob Gunton (B.A. 1968) – actor, The Shawshank Redemption, 24, Desperate Housewives; Tony-nominated for Broadway roles in Evita and Sweeney Todd
Tyler Hoechlin (student athlete – baseball) – actor, star of Teen Wolf
Xanthe Huynh (attended) – voice actress, known for her work on The Familiar of Zero, Magical Girl Lyrical Nanoha, K-On!, Love Live, Persona, and Fire Emblem
Carrie Ann Inaba (attended) – actress/dancer, a judge on ABC's Dancing with the Stars
Chris Kelly (B.A. 2005) – comedy writer (Saturday Night Live) and filmmaker (Other People); co-creator of The Other Two
Kelly Lin (attended, Economics and English & Comparative Literature) – Taiwanese actress, nominated for best actress for role in Mad Detective at 2007 Venice Film Festival
Jon Lovitz (B.A. 1979, Theater) – actor and comedian, Saturday Night Live
Andrea Lowell (attended, Biology) – model, Playboy, VH1's The Surreal Life #6
Beth Malone (B.A. 2000, Theater) - Tony award nominee for Best Actress in a musical
Tom Martin (B.A. 1987, Economics & Political Science) – TV writer whose credits include The Simpsons and Saturday Night Live
Joseph McGinty Nichol a.k.a. McG (B.S. 1990, Psychology) – film director (Charlie's Angels, Terminator Salvation), co-creator of TV series The O.C.
Joseph Andrew Mclean (English Literature, Politics, Screenwriting) – Scottish filmmaker, studied at UCI as an exchange student from the University of Glasgow
Jeff Meek (B.A. 1983, Drama) – TV actor, Raven, As the World Turns and Mortal Kombat: Conquest
Windell Middlebrooks (M.F.A. 2004, Drama) – actor, star of Body of Proof
Grant Nieporte (B.A. 1995, Social Sciences) – film writer, credits include 2008 Will Smith film Seven Pounds
Sophie Oda (B.F.A. 2012, Musical Theater) – guest actress on The Suite Life of Zack & Cody
Kelly Perine (M.F.A. Acting, 1994), television actor (Between Brothers, The Parent 'Hood, One on One)
Aurora Snow (attended, Theater Arts, Business) – pornographic actress
Phil Tippett (B.A., Fine Arts) – filmmaker, Star Wars, RoboCop and Jurassic Park
Brian Thompson (M.F.A. Acting, 1984) – actor
Thuy Trang (attended, Civil Engineering) – actress, Trini from Power Rangers
Byron Velvick (B.A. 1987, English) – star of The Bachelor, Season 6
Literature
Nevada Barr (M.F.A. 1978) – author of Anna Pigeon Mysteries
Michael A. Bellesiles (PhD 1986) – controversial historian and author of Arming America, The Origins of a National Gun Culture
Aimee Bender (M.F.A. 1998) – author of The Girl in the Flammable Skirt
David Benioff (M.F.A. 1999) – author of The 25th Hour, husband of actress Amanda Peet, co-creator of HBO's hit television series Game of Thrones
Chelsea Cain (B.A. 1994, Political Science) – fiction writer
Michael Chabon (M.F.A. 1987) – 2001 Pulitzer Prize-winning author for his novel The Amazing Adventures of Kavalier & Clay
Leonard Chang (M.F.A) – Korean-American writer of short stories and novels
Joshua Ferris (M.F.A.) – author of Then We Came to the End which was nominated for the National Book Award and won the PEN/Hemingway Award
Brian Flemming (B.A. 1998, English) – director and playwright
Richard Ford (M.F.A. 1970) – Pulitzer Prize-winning author in Fiction for Independence Day
Glen David Gold (M.F.A . 1998) – author of Carter Beats the Devil and Sunnyside
Yusef Komunyakaa (M.F.A. 1980) – Pulitzer Prize-winning poet for Neon Vernacular
T. Jefferson Parker (M.F.A. 1976) – fiction writer and author of Laguna Heat, Little Saigon, and Pacific Heat
Alice Sebold (M.F.A. 1998) – bestselling novelist, author of The Lovely Bones
Danzy Senna (M.F.A. 1996) – writer
Maria Helena Viramontes (M.F.A. 1994) – Chicana fiction writer
Peter Wild (M.F.A. 1969) – poet, historian, and Professor of English at the University of Arizona
Music
Joey Burns – frontman of Calexico
Clara Chung (B.A. 2009, Psychology) – guitarist and singer
Gregory Coleman (M.A. 2005, Fine Arts) – classical guitarist, recording artist, composer, arranger, educator
Coco Lee (attended) – Chinese pop star
Till Kahrs (B.A. 1979, Social Science) - recording artist, singer-songwriter, communication skills expert
Kevin Kwan Loucks (B.M. 2004, Piano Performance) – International Concert Pianist; President and Co-Founder of Chamber Music OC; Member of Classical Music Ensemble Trio Céleste
Jeffrey Mumford (B.A. 1977) – classical music composer
Aubrey O'Day (B.A. 2005, Political Science) – member of Danity Kane (Making the Band 3 contestant)
Savitree Suttichanond (B.A. 2007, International Studies) – Thai singer and actress, Academy Fantasia Season 5
Kaba Modern – Asian-American dance group established in 1992; 6 alumni dancers appeared on America's Best Dance Crew
Teal Wicks (B.A. 2005, Drama) – American singer and stage actress, best known for playing the role of Elphaba in the Broadway production of the musical Wicked
Vanness Wu – Taiwanese singer, actor, director, producer, member of F4 (band)
Members of Farside – hardcore punk band
Members of Thrice – hardcore/rock Band
Kei Akagi – toured as pianist for famous jazz musician and composer Miles Davis
Joseph Vincent – YouTube singer/songwriter
Members of Milo Greene – indie/pop band
Members of SLANDER – trap DJ duo
Baseball
Brady Anderson (attended, Economics) – former MLB outfielder for the Boston Red Sox and the Baltimore Orioles, three-time American League all-star
Dylan Axelrod (B.A. 2007, Social Ecology) – pitcher for the Chicago White Sox
Christian Bergman – pitcher for the Colorado Rockies
Doug Linton – former MLB pitcher
Ben Orloff, minor league baseball player
Bryan Petersen – outfielder for the Florida Marlins
Sean Tracey – former MLB pitcher for the Baltimore Orioles and Chicago White Sox
Gary Wheelock – former MLB pitcher for the California Angels and Seattle Mariners
Basketball
Scott Brooks (B.A. 1987) – former head coach of the NBA team Oklahoma City Thunder; point guard on the 1994 NBA Champion Houston Rockets, current head coach of the Washington Wizards
Steve Cleveland (B.A. 1976) – men's head basketball coach at Fresno State University
Kevin Magee (1959–2003) - basketball player
Tod Murphy (B.A. 1986) – former NBA player and third-round pick of the Seattle SuperSonics in the 1986 NBA draft; assistant coach for the UCI Men's basketball team
Tom Tolbert (transferred) – former NBA player and current color-commentator for ABC sports
Olympians
Peter Campbell (B.A. 1982) – won two silver medals (1984 and 1988) Olympics (water polo)
Jennifer Chandler – gold medalist in the 1976 Olympics (diving)
Gary Figueroa (B.A. 1980) – silver medalist in the 1984 Olympics (water polo)
Brad Alan Lewis (B.A. 1976) – gold medalist in the 1984 Olympics (rowing), author of Assault on Lake Casitas
Greg Louganis (B.A. 1983) – four-time Olympic gold medalist (diving)
Amber Neben (M.S. Biology) – U.S. National Road Race Champion, 2005 & 2006 Tour de l'Aude winner, 2008 World TT Champion, and placed 33rd in the road race at the 2008 Olympics
Mike Powell – world long jump record-holder; two-time Olympic silver medalist (track and field)
Steve Scott (B.A. 1978) – American record holder for the indoor mile (3:58.7); world record holder for the most sub-Four-minute miles (136)
Soccer
Carlos Aguilar – forward for the Rochester Rhinos of USL Professional Division
Cameron Dunn – defender for the Los Angeles Blues of the USL Professional Division
Brad Evans – midfielder for the Seattle Sounders of Major League Soccer and the United States men's national soccer team
Irving Garcia – forward for the New York Red Bulls of Major League Soccer
Anthony Hamilton (B.A. 2010, Literary Journalism) – forward for the Rochester Rhinos of the USL Professional Division
Miguel Ibarra - midfielder for Minnesota United of Major League Soccer
Cameron Iwasa - forward for Sporting Kansas City of Major League Soccer
Kenny Schoeni (B.A. 2006) – former goalkeeper for the Columbus Crew of Major League Soccer
David Sias (B.A. 2008, Economics) – former defender for the Austin Aztex of the USL Professional Division
Cami Privett (majored in sociology) – former NWSL soccer player for the Houston Dash
Other athletics
David Baker (B.A. 1975) – commissioner of the Arena Football League, 1996 to present
Carl Cheffers – National Football League official. Head referee for Super Bowl LI and Super Bowl LV.
Shane del Rosario (B.A. Psychology) – professional mixed martial artist
Darren Fells (B.A. Sociology) – tight end for the Houston Texans. Played basketball at UCI and professionally in Europe and Latin America.
Jillian Kraus (born 1986; MBA), water polo player
Joe Lacob (B.S. Biological Sciences) – owner of NBA's Golden State Warriors
Business
Arnnon Geshuri – corporate human relations executive
Wright Massey (M.B.A. 1992) – CEO of Brand Architecture, Inc.
Betsy McLaughlin (B.A.) – CEO of Hot Topic, Inc.
Vince Steckler (B.A.) – CEO of Avast Software
Military
John P. Condon (Ph.D. 1976) - U.S. Marine Corps Major General and Aviator
Leon J. LaPorte (M.B.A. 1977) – retired United States Army General who served as Commander, United States Forces Korea until 2006
Laura Yeager, U.S. Army general, first woman to command an Army infantry division
Miscellaneous
Generosa Ammon (B.A. 1981) – widow of Ted Ammon
Khaldoun Baghdadi (B.A.) – Palestinian-American attorney
David J.R. Frakt (B.A. 1990, History) – lawyer, law professor, noted for his appointment to defend Guantanamo detainee Mohammed Jawad
Erin Gruwell (B.A. 1991) – high school teacher whose real-life story inspired the movie Freedom Writers
Michael Ramirez (B.S. 1984) – 1994 Pulitzer Prize for Editorial Cartooning in the Memphis Commercial Appeal; senior editor for Investor's Business Daily
Lisa Marie Scott (attended) – Playboy centerfold, February 1995
Politics and government
Ami Bera (B.S. 1987, Biological Sciences & M.D. 1991) – United States Congressman representing California's 7th Congressional District
Tim Donnelly (B.A. 1989) – California State Assemblyman representing the 59th assembly district
Jeremy Harris (M.S. Environmental Biology) – former mayor of Honolulu
Mark Keam (B.A. 1988, Political Science) – member of the Virginia House of Delegates representing the 35th district
Bill Leonard (B.A. 1969, History) – former California State Senator
Linda Newell (B.A.) – Colorado State Senator representing the 26th District
Janet Nguyen (B.A. 1998, Political Science) – member of the Orange County Board of Supervisors, first Vietnamese-American to hold county office in the United States
Geoffrey R. Pyatt (B.A. 1985) - U.S. Ambassador to Greece
Michael A. Rice (PhD 1987, Comparative Psychology) – Rhode Island State Representative representing the 35th District
Jose Solorio (B.S. 1992, Social Ecology) – California State Assemblyman, prior Councilmember of Ward 1 – City of Santa Ana
Audra Strickland (B.A. 1996, Political Science) – California State Assemblywoman representing the 37th assembly district
Van Tran (B.A. 1990, Political Science) – California State Assemblyman, first Vietnamese-American state legislator in the United States
Science and technology
Paul Chien (PhD 1971) – biologist known for research on the physiology and ecology of intertidal organisms
Deanna M. Church (PhD 1997) – bioinformatics and genomics researcher
Charles Falco (PhD 1974) – experimental physicist whose research resulted in the Hockney-Falco Thesis
Roy Fielding (PhD 2000) – Internet pioneer, creator of HTTP 1.1, co-founder of Apache Foundation
Efi Foufoula-Georgiou – professor of environmental engineering on the UCI faculty
Bart Kosko (PhD 1987) – hybrid intelligent system expert
Lawrence L. Larmore (PhD 1986) – online algorithms researcher, faculty member at UC Riverside and UNLV
James D. McCaffrey (B.A. 1975) – software engineer and author; see combinatorial number system and factorial number system
Paul Mockapetris (PhD 1982, Computer Science) – Internet pioneer, co-inventor of the Domain Name System
Kathie L. Olsen (PhD 1979, Neuroscience) – chief operating officer of the National Science Foundation
Andrew P. Ordon (B.S. 1972) – plastic surgeon, host of The Doctors
Jim Whitehead, (PhD 2000) – originator of WebDAV, UCSC professor
Social Scientists
Noeline Alcorn - education researcher
Sara Diamond (B.A. 1980) – sociologist
Kevin Nadal (B.A. 2000) – professor of psychology, author, and media correspondent
Faculty
Vartkess Ara Apkarian – Distinguished Professor of Chemistry and Director of Center for Chemistry at the Space-Time Limit
Francisco J. Ayala – 2001 National Medal of Science, 2010 Templeton Prize, Founding Director of the Bren Fellows Program, Professor of Ecology and Evolutionary Biology, and of Philosophy
Ricardo Asch – fertility doctor and fugitive, accused of stealing ova from women while a UC Irvine employee
Pierre Baldi – Chancellor's Professor of Computer Science and director of the Institute for Genomics and Bioinformatics
Lindon W. Barrett – Director of African Studies, professor and cultural theorist
Gregory Benford – physicist, science fiction author of the Galactic Center Saga
George W. Brown – Information scientist and dean of the business school
Jan Brueckner - Distinguished Professor of Economics
Ron Carlson – Professor of English, MFA Programs in Writing
Leo Chavez – anthropologist and author
Erwin Chemerinsky – founding dean of the UCI School of Law, lawyer, law professor, and United States constitutional law and civil procedure scholar
Robert Cohen – acting teacher and author
Rui de Figueiredo – Research Professor of Electrical Engineering and Computer Science, and Mathematics
Jacques Derrida (faculty 1986–2004, until his death) – philosopher
Paul Dourish – Professor of Informatics
Nikil Dutt – Chancellor's Professor of Computer Science
David Eppstein – Professor of Computer Science
Walter M. Fitch – Professor of Molecular Evolution
Michael Franz – Chancellor's Professor of Computer Science
Matthew Foreman – Professor of Mathematics
Robert Garfias – musicologist, awarded the Order of the Rising Sun
Jean-Luc Gaudiot – professor at the Henry Samueli School of Engineering
Amy Gerstler – poet, Professor of English, winner of National Book Critics Circle Award
Michael T. Goodrich – Chancellor's Professor of Computer Science
Louis A. Gottschalk – neuroscientist, Professor Emeritus
Jutta Heckhausen – Professor of Psychological Science
Payam Heydari – Chancellor's Professor of Electrical Engineering and Computer Science
James D. Herbert – professor and chair of the Art History department
Dan Hirschberg – Professor of Computer Science
Hamid Jafarkhani – Professor of Electrical Engineering and Computer Science
Ramesh Jain – Bren Professor of Computer Science
Valerie Jenness – Professor of Criminology, Law & Society, Sociology, and Nursing Science
Wilson Ho – Donald Bren Professor of Physics and Chemistry and discoverer of scanning tunneling microscopy based inelastic electron tunneling spectroscopy
Murray Krieger – literary critic and theorist
Michelle Latiolais – Professor of English, MFA Programs in Writing
Elizabeth Loftus – psychologist, Distinguished Professor in the School of Social Ecology
R. Duncan Luce – cognitive psychologist, 2003 National Medal of Science, Distinguished Research Professor of Cognitive Science
George Marcus – Chancellor's Professor of Anthropology
Athina Markopoulou – Chancellor's Professor of Engineering
James McGaugh – Research Professor of Neurobiology and Behavior, founding Director of the Center for the Neurobiology of Learning and Memory
Donald McKayle – choreographer
Penelope Maddy – Distinguished Professor of Logic and Philosophy of Science and Mathematics, famous for her work in philosophy of mathematics
David B. Malament – Distinguished Professor of Logic and Philosophy of Science, best known for his work in the philosophy of physics.
J. Hillis Miller – literary critic
Peter Navarro – Professor of business, incumbent director of the White House National Trade Council
Bonnie Nardi – Professor of Informatics
David Neumark – Professor of Economics, expert on labor economics
Ngũgĩ wa Thiong'o – author of A Grain of Wheat, Distinguished Professor in the School of Humanities and director of the International Center for Writing and Translation
James Nowick – Professor of Chemistry
William H. Parker, Professor of Physics
Richard E. Pattis – author of the Karel programming language
Lyman W. Porter - dean of UC Irvine's Paul Merage School of Business from 1972 to 1983
Curt Pringle – mayor of Anaheim, former speaker of the California State Assembly
R. Radhakrishnan – Chancellor's Professor of English and Comparative Literature
Frederick Reines (faculty 1966–1998, deceased) – Nobel laureate, Physics 1995
Irwin Rose – Nobel laureate (Chemistry 2004)
Eric Rignot – Professor of Earth System Science
F. Sherwood Rowland – Nobel laureate (Chemistry 1995), Research Professor in Chemistry and Earth System Science
Donald G. Saari – Distinguished Professor of Mathematics and Economics
Dr. William Sears – Associate Clinical Professor of Pediatrics, author of the Sears Parenting Library
Patricia Seed – Professor of History
Barry Siegel – Pulitzer Prize winner
Brian Skyrms – philosophy of science expert, Distinguished Professor of social science
David A. Snow – Distinguished Professor of Sociology
Etel L. Solingen – Tierney Chair, Professor of Political Science, former President of the International Studies Association, 2012–2013, author of award-winning Nuclear Logics
George Sperling – cognitive psychologist, Distinguished Professor of Cognitive Science
Grover C. Stephens (faculty 1964–2003, deceased) – Professor and dean of Biological Sciences
Lee Swindlehurst – Professor of Electrical Engineering and Computer Science
Rein Taagepera (until 1991) – Estonian politician and political scientist
Edward O. Thorp – author (Beat the Dealer: A Winning Strategy for the Game of Twenty-One), professor of mathematics
Deborah Vandell, Founding Dean of the School of Education, expert on child care and after-school programs
Martin Wattenberg – political scientist
Douglas R. White – social anthropologist and network sociologist, author of Network Analysis and Ethnographic Problems
Kumar Wickramasinghe – Henry Samueli Endowed Chair and inventor of Kelvin Probe Force Microscopy and other microscopy techniques
Jon Wiener - historian
Geoffrey Wolff – co-director of a writing program
Jenny Y Yang – chemist
Staff and administrators
Daniel Aldrich – founding chancellor
Ralph J. Cicerone – fourth chancellor, former president of the National Academy of Sciences
Larry Coon – basketball writer
Michael V. Drake – fifth chancellor, president of Ohio State University
Jack Peltason – second chancellor, former president of the University of California
William Pereira – original architect of the campus and surrounding city
Tom Jennings – creator of FidoNet
References
Irvine people |
22127041 | https://en.wikipedia.org/wiki/Chuck%20Versus%20the%20Predator | Chuck Versus the Predator | "Chuck Versus the Predator" is the seventeenth episode of the second season of Chuck. It originally aired on March 23, 2009. Chuck Bartowski reluctantly tells his handlers that he has been contacted by Orion, the mastermind behind the Intersect computer and the person who can erase the Intersect from his brain. When the team goes to retrieve the computer Orion sent to Chuck, they run into a Fulcrum agent named Vincent (Arnold Vosloo). After Orion's computer is brought back successfully, General Beckman (Bonita Friedericy) arrives in person to oversee the operation to locate Orion. Meanwhile, a conflict breaks out between the Burbank and Beverly Hills Buy More branches.
Plot
Main plot
Chuck Bartowski, John Casey and Sarah Walker return from an assignment involving plumbing. Chuck still longs to find Orion, the chief designer of the Intersect and the only one who could remove it, to return to his civilian life. As he stays up all night reviewing the chart behind the Tron poster (from the previous episode) and searching for Orion on his computer, he notices that the webcam activates. On the other end of the connection, an unknown man in Hong Kong types on a laptop. He then types on a wrist-mounted computer pad and leaves. A group of Fulcrum agents led by Vincent Smith (Arnold Vosloo) then arrives, identifying the man as Orion, but a missile suddenly strikes and destroys the agents.
Later, Orion sends Chuck a message on a computer, revealing his knowledge that Chuck is the Human Intersect. Orion wishes to meet with Chuck and sends him a computer. Chuck promptly informs Sarah, Casey, and General Beckman (Bonita Friedericy), who wants the computer since it is capable of breaking into defense computers and hijacking weaponry. Meanwhile, Lester Patel intercepts the computer at the Buy More, believing it to be the new Roark 7 gaming laptop.
At the Buy More, Lester, Morgan Grimes and Jeff Barnes take the advanced laptop. When opened, Orion asks for identification. When they reply that they cannot, Orion assumes that they are in danger and sends a Predator drone from Edwards Air Force Base. Assuming it is a simulation game, they target their own location. Casey detects the drone and prepares to evacuate the building, but the group in the bathroom instead target the Beverly Hills Buy More (See "Buy More"). Chuck realizes that his coworkers have intercepted the laptop and orders Morgan to stop playing with the computer. He finds them in the men's room and enters a target sequence cancel code to abort the aerial support mission. Big Mike walks in on the commotion, assumes that the laptop is the Roark 7, takes it and places it in a safe, assigning Emmett Milbarge (Tony Hale) to guard it overnight. Meanwhile, Vincent reports his discoveries to the Ring Elders and persuades them to let him infiltrate the Burbank Buy More.
As Emmett guards the Buy More, Casey, Chuck and Sarah sneak in to retrieve the laptop. Chuck orders Casey and Sarah to cause a distraction while he inputs the combination to Big Mike's safe, telling them not to use guns or violence. As the agents get into place, Emmett walks past Lester and Jeff, who have their own plan to steal the laptop. As Jeff attempts to throw his voice to distract Emmett, Emmett seeks the source, leading everyone to scatter. As Chuck enters the office, Lester mistakenly scolds Casey (believing him to be Jeff) and Jeff mistakenly talks to Sarah (believing her to be Lester). Suddenly, Vincent enters through the roof to steal the laptop. After Chuck unlocks the safe, he turns around and flashes on a gun-wielding Vincent.
When Jeff and Lester realize that there are other robbers in the store, they flee, only to get maced by Emmett. As Emmett gloats, Casey incapacitates him with an elbow strike. As Sarah joins Casey, Chuck comes out held at gunpoint by Vincent, who demands that they drop their guns. When Chuck explains why his plan excludes guns, Vincent explains that it would be unprofessional not to shoot someone. Casey produces a gun and shoots Vincent first. Before they can question Vincent, he ingests tetrodotoxin.
Casey dumps Vincent's body in Castle. Beckman stops Chuck from opening the laptop, suspecting Orion of being in Fulcrum and taking over the operation personally, warning Chuck that he may have to return with her if Fulcrum knows his identity. When Orion again contacts Chuck and calls his cell phone, Chuck explains that the computer is locked up in a government facility and Orion is suspected of being connected to Fulcrum. Orion tells Chuck to look at the computer and Chuck flashes on a symbol. When Chuck asks how Orion knew that was in the Intersect, he replies that he put it there, fully convincing Chuck.
Orion shows Chuck surveillance of a meeting of Beckman, Sarah and Casey, where Beckman explains that Chuck is too important to national security to allow Orion to remove the Intersect, hence the lie that Orion is connected to Fulcrum. After seeing Sarah agree, Chuck flees, leaving his watch behind. Chuck reaches the laptop, and Orion confirms that he can remove the Intersect. As they arrange a meeting, Vincent crawls out of his body bag and kidnaps Chuck, having been trained to survive tetrodotoxin's near-death state. Meanwhile, Sarah and Casey realize that Chuck has left his apartment and notice the address of his meeting with Orion on a surveillance video.
At the rendezvous, Vincent threatens to kill Chuck's family, forcing Chuck to tell Orion that he is safe. Suddenly, Fulcrum agents surround and capture Orion. As Orion is led to a helicopter on the roof, Sarah and Casey arrive and shoot Vincent. As the helicopter takes off, Chuck opens the laptop and finds that Orion has activated an "Emergency Protocol". Suddenly, the Predator drone arrives and destroys the helicopter, with Orion inside.
After Beckman admits that she does not want the Intersect removed from Chuck's head, Chuck returns home to find a packet containing cards depicting the Intersect and a disc. On the disc, a video of Orion instructs Chuck to study the cards, then self-destructs. Beckman questions Casey about Sarah's reliability and feelings towards Chuck, requesting everything Casey knows about them. Meanwhile, Chuck reviews the cards, determined to return to his old life.
Buy More
The Buy More employees discover their store toilet papered by the employees of the Beverly Hills branch, led by Barclay (Matt Winston). Big Mike reveals that they are jealous that the Burbank branch is getting the new Roark Instruments laptop first. Lester later intercepts Orion's laptop for Chuck, believing it to be the Roark 7. From Jeff's "office" in the men's bathroom, Lester, Jeff, and Morgan command a Predator drone and nearly bomb themselves, believing it to be a simulation game. They then command it to bomb the Beverly Hills branch, but Chuck arrives and cancels the mission just in time. Big Mike walks in on the commotion, presumes that the laptop is the Roark 7, takes it, places it in a safe, and assigns Emmett to guard it overnight.
Jeff and Lester later sneak into the store to steal the laptop, becoming separated. Realizing that they are actually talking to Sarah and Casey, they flee the store. They are then maced by Emmett, who is then incapacitated by Casey. Big Mike and Morgan arrive, and Jeff and Lester lie about overhearing the break-in. Emmett thinks that an entire crew has attacked him, leading everyone to suspect the Beverly Hills branch.
They enter the store to retaliate, and Emmett accidentally causes a chain reaction of shelves tipping over. As they celebrate their victory the next day, the Beverly Hills group enters and accuses them. Big Mike convinces them that involving the police is not "the Buy More way" and the group exits. Mike thanks Morgan and informs him of rumors of a store closing, thus reassuring his de facto stepson that he will not let that happen.
Production
Flashes
Chuck flashes on Vincent at the Buy More
Chuck flashes on the symbol that Orion puts on his computer screen
Cultural references
There are a number of allusions to The Matrix in this episode. Orion's communication with Chuck via computer screens is similar to when Morpheus first contacts Neo in his bedroom, such as when Chuck's search of the Internet is interrupted by Orion. There are some musical clues, such as when Vincent is about to shoot Chuck, and audio clues, as when Chuck opens the laptop sent to him by Orion, that are very similar to the music and sounds used in The Matrix. Chuck's phone conversation with Orion and his plan to escape the agents guarding the area is reminiscent of Neo's conversation with Morpheus at his work. The comic book Chuck appears to read at the end of this episode to hide his Intersect research is titled "Ex Machina" - which is Latin for "out of the machine".
Also, when Big Mike confronts the Beverly Hills Buy More staff after his own employees had broken into the Beverly Hills store, he mimics Sean Connery's speech in The Untouchables.
Emmett compares himself to various gunslingers portrayed in film, including Shane, Matt Dillon and Clint Eastwood.
Orion saying "this disc will self-destruct in five seconds" alludes to the Mission: Impossible franchise, especially the original television series.
When Emmett misses a cardboard cutout of Barclay in the Beverly Hills Buy More he says, "all right, one more time...with feeling," before swinging again. It's a line spoken by Bruce Lee in Enter the Dragon.
Critical response
"Chuck Versus the Predator" received positive reviews from critics. Steve Heisler of The A.V. Club gave the episode an A, though he expressed disappointment that plot points were resolved so quickly. Eric Goldman of IGN gave the episode a 9 out of 10, praising the burglary scene. "From Jeff and Lester mistaking Sarah and Casey for, well, Jeff and Lester; to Emmett spraying mace in Jeff and Lester's face; to Casey revealing he still had a gun on him, despite Chuck telling him not to, this sequence was expertly constructed."
The episode drew 6.156 million viewers.
References
External links
Predator
2009 American television episodes |
238329 | https://en.wikipedia.org/wiki/DotGNU | DotGNU | DotGNU is a decommissioned part of the GNU Project that aims to provide a free software replacement for Microsoft's .NET Framework by Free Software Foundation. Other goals of the project are better support for non-Windows platforms and support for more processors.
The main goal of the DotGNU project code base was to provide a class library that is 100% Common Language Specification (CLS) compliant.
Main development projects
Portable.NET
DotGNU Portable.NET, an implementation of the ECMA-335 Common Language Infrastructure (CLI), includes software to compile and run Visual Basic .NET, C#, and C applications that use the .NET base class libraries, XML, and Windows Forms. Portable.NET claims to support various instruction set architectures including x86, PPC, ARM, and SPARC.
DGEE
DotGNU Execution Environment (DGEE) is a web service server.
libJIT
libJIT is a just-in-time compilation library intended for the development of advanced just-in-time compilers in virtual machine implementations, dynamic programming languages, and scripting languages. It implements an intermediate representation based on three-address code, in which variables are kept in static single assignment form.
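A minimal sketch of how a client might drive libJIT is shown below. It is written in C against the API used in libJIT's own tutorial (jit_context_create, jit_insn_add, and so on); the exact names and signatures should be checked against the installed version, and the function built here is purely illustrative.

#include <jit/jit.h>

int main(void)
{
    /* Every JIT-compiled function lives in a context. */
    jit_context_t context = jit_context_create();
    jit_context_build_start(context);

    /* Build the signature int f(int x, int y). */
    jit_type_t params[2] = { jit_type_int, jit_type_int };
    jit_type_t signature =
        jit_type_create_signature(jit_abi_cdecl, jit_type_int, params, 2, 1);
    jit_function_t function = jit_function_create(context, signature);

    /* Each jit_insn_* call is one three-address instruction:
       a fresh temporary value is produced from at most two operands. */
    jit_value_t x = jit_value_get_param(function, 0);
    jit_value_t y = jit_value_get_param(function, 1);
    jit_value_t t1 = jit_insn_add(function, x, y);   /* t1 = x + y  */
    jit_value_t t2 = jit_insn_mul(function, t1, y);  /* t2 = t1 * y */
    jit_insn_return(function, t2);

    jit_function_compile(function);
    jit_context_build_end(context);

    /* Call the freshly compiled native code. */
    jit_int a = 3, b = 4, result;
    void *args[2] = { &a, &b };
    jit_function_apply(function, args, &result);     /* result == 28 */

    jit_context_destroy(context);
    return 0;
}

The point of the example is the instruction-building step: every operation produces a new temporary, which is the three-address, single-assignment style of the intermediate representation described above.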
libJIT has also seen some use in other open source projects, including GNU Emacs, ILDJIT and HornetsEye.
Framework architecture
The Portable .NET class library seeks to provide facilities for application development. These are primarily written in C#, but because of the Common Language Specification they can be used by any .NET language. Like .NET, the class library is structured into Namespaces and Assemblies. It has additional top-level namespaces including Accessibility and DotGNU. In a typical operation, the Portable .NET compiler generates a Common Language Specification (CLS) image, as specified in chapter 6 of ECMA-335, and the Portable .NET runtime takes this image and runs it.
Free software
DotGNU points out that it is Free Software, and it sets out to ensure that all aspects of DotGNU minimize dependence on proprietary components, such as calls to Microsoft Windows' GUI code. DotGNU was for a time one of the Free Software Foundation's High Priority Free Software Projects.
DotGNU and Microsoft's patents
DotGNU's implementation of those components of the .NET stack not submitted to the ECMA for standardization has been the source of patent violation concerns for much of the life of the project. In particular, discussion has taken place about whether Microsoft could destroy the DotGNU project through patent suits.
The base technologies submitted to the ECMA may be non-problematic. The concerns primarily relate to technologies developed by Microsoft on top of the .NET Framework, such as ASP.NET, ADO.NET, and Windows Forms (see Non standardized namespaces), i.e. parts composing DotGNU's Windows compatibility stack. These technologies are today not fully implemented in DotGNU and are not required for developing DotGNU-applications.
In 2009, Microsoft released .NET Micro Framework under Apache License, Version 2.0, which includes a patent grant. However, the .NET Micro Framework is a reimplementation of the CLR and limited subset of the base class libraries meant for use on embedded devices. Additionally, the patent grant in the Apache License would have protected only contributors and users of the .NET Micro Framework—not users and developers of alternative implementations such as DotGNU or Mono.
In 2014, Microsoft released Roslyn, the next generation official Microsoft C# compiler, under the Apache License. Later that year, Microsoft announced a "reboot" of the official .NET Framework. The framework would be based on .NET Core, including the official runtime and standard libraries released under the MIT License and a patent grant explicitly protecting recipients from Microsoft-owned patents regarding .NET Core.
See also
Comparison of application virtual machines
Portable.NET – A portable version of the DotGNU toolchain and runtime
Mono – A popular free software implementation of Microsoft's .NET
Common Language Runtime
Shared Source Common Language Infrastructure – Microsoft's shared source implementation of .NET, previously codenamed Rotor
References
External links
Project homepage
Article '2001 – The Year When DotGNU Was Born'
A 2003 interview with Norbert Bollow of DotGNU
.NET implementations
Computing platforms
GNU Project software |
2913306 | https://en.wikipedia.org/wiki/Mary%20Lee%20Woods | Mary Lee Woods | Mary Lee Berners-Lee (née Woods; 12 March 1924 – 29 November 2017) was an English mathematician and computer scientist who worked in a team that developed programs in the Department of Computer Science, University of Manchester Mark 1, Ferranti Mark 1 and Mark 1 Star computers. She was the mother of Sir Tim Berners-Lee, the inventor of the World Wide Web and Mike Berners-Lee, an English researcher and writer on greenhouse gases.
Early life and education
Woods was born on 12 March 1924, in Hall Green, Birmingham to Ida (née Burrows) and Bertie Woods. Both her parents were teachers. She had a brother who served in the Royal Air Force during World War II and was killed in action. She attended Yardley Grammar School in Yardley, Birmingham, where she developed an aptitude for mathematics. From 1942 to 1944, she took a wartime compressed two-year degree course in mathematics at the University of Birmingham. She then worked for the Telecommunications Research Establishment at Malvern until 1946 when she returned to take the third year of her degree. After completing her degree she was offered a fellowship by Richard van der Riet Woolley to work at Mount Stromlo Observatory in Canberra, Australia, from 1947 to 1951 when she joined Ferranti in Manchester as a computer programmer.
Ferranti computer programming group
On joining the UK electrical engineering and equipment firm Ferranti, she started working in a group led by Dr John Makepeace Bennett.
She worked on both the Ferranti Mark 1 and the Ferranti Mark 1 Star computers. The programs for these computers were written in machine code, and there was plenty of room for error because every bit had to be right. The machines used serial 40-bit arithmetic (with a double length accumulator), which meant that there were considerable difficulties in scaling the variables in the program to maintain adequate arithmetic precision.
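The scaling problem can be illustrated with a modern fixed-point analogue in C. This is not Mark 1 code: the 16-bit scale factor and the 32-bit/64-bit word sizes below are illustrative stand-ins for the machine's 40-bit words and double-length accumulator, chosen only to show why the programmer had to track where the binary point sat in every variable.

#include <stdint.h>
#include <stdio.h>

/* Fractional values are held as integers scaled by 2^16; the programmer,
   not the hardware, keeps track of where the binary point sits. */
#define SCALE_BITS 16

static int32_t to_fixed(double x)    { return (int32_t)(x * (1 << SCALE_BITS)); }
static double  from_fixed(int32_t x) { return (double)x / (1 << SCALE_BITS); }

static int32_t fixed_mul(int32_t a, int32_t b)
{
    int64_t acc = (int64_t)a * (int64_t)b;  /* double-length accumulator */
    return (int32_t)(acc >> SCALE_BITS);    /* rescale to single length */
}

int main(void)
{
    int32_t a = to_fixed(3.25), b = to_fixed(0.5);
    printf("%f\n", from_fixed(fixed_mul(a, b)));    /* prints 1.625000 */
    return 0;
}

Choosing the scale factor badly either overflows the single-length result or throws away precision, which is the kind of judgement the Ferranti programmers had to exercise by hand for every variable.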
The Ferranti programming team members found it useful to commit to memory the following sequence of characters, which represented the numbers 0–31 in the International Telegraph Alphabet No. 1, the 5-bit binary code of the paper tape that was used for input and output:
Another difficulty of programming the Ferranti Mark 1 computers was the two-level storage of the computers. There were eight pages of Williams cathode ray tube (CRT) random access memory as the fast primary store, and 512 pages of the secondary store on a magnetic drum. Each page consisted of thirty-two 40-bit words, which appeared as sixty-four 20-bit lines on the CRTs. The programmer had to control all transfers between electronic and magnetic storage, and the transfers were slow and had to be reduced to a minimum. For programs dealing with large chunks of data, such as matrices, partitioning the data into page-sized chunks could be troublesome.
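The flavour of that manual page management can be suggested by the following C sketch. The page and store sizes are taken from the figures above, but everything else, the drum_read/drum_write helpers, the choice of a 64-bit integer to stand in for a 40-bit word and the trivial computation, is an illustrative assumption rather than a reconstruction of Mark 1 practice.

#include <stdint.h>
#include <string.h>

/* Sizes from the article: 32 words per page, 8 pages of fast (CRT) store,
   512 pages of drum store.  A 40-bit word is modelled as a 64-bit integer. */
#define WORDS_PER_PAGE 32
#define FAST_PAGES      8
#define DRUM_PAGES    512

typedef uint64_t word40;
static word40 drum[DRUM_PAGES][WORDS_PER_PAGE];
static word40 fast[FAST_PAGES][WORDS_PER_PAGE];

/* Stand-ins for the explicit transfers the programmer had to issue. */
static void drum_read(int page, int slot)  { memcpy(fast[slot], drum[page], sizeof fast[slot]); }
static void drum_write(int page, int slot) { memcpy(drum[page], fast[slot], sizeof fast[slot]); }

/* Add a constant to a large array held on the drum, one page at a time;
   no data can be touched until it has been brought into the fast store. */
void add_constant(int first_page, int num_pages, word40 c)
{
    for (int p = 0; p < num_pages; p++) {
        drum_read(first_page + p, 0);            /* drum -> fast store */
        for (int i = 0; i < WORDS_PER_PAGE; i++)
            fast[0][i] += c;
        drum_write(first_page + p, 0);           /* fast store -> drum */
    }
}

Every transfer is written out explicitly, which conveys why data that did not partition neatly into 32-word pages, such as large matrices, was troublesome to handle.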
The Ferranti Mark 1 computer worked in integer arithmetic, and the engineers built the computer to display the lines of data on the CRTs with the most significant bit on the right due to their background in radar. This could be argued as the logically sensible choice, but was changed to the more conventional system of the most significant bit on the left for the Mark 1 Star. The Mark 1 Star worked with both fractions and integers. The Baudot teleprinter code was also abandoned for one that was in the following order:
Program errors for the Ferranti Mark 1 computers were difficult to find. Programmers would sit at the computer control desk and watch the computer perform one instruction at a time in order to see where unintended events occurred. However, computer time became more and more valuable, so Dr Bennett suggested that Woods write a diagnostic program to print out the contents of the accumulator and particular store lines at specific points in the program so that error diagnosis could take place away from the computer. The challenge of her routine, 'Stopandprint', was that it had to monitor the program under diagnosis without interfering with it, and the limited space in the fast store made this difficult. Along with Bennett and Dr D.G. Prinz, Woods was involved in writing interpretive subroutines that were used by the Ferranti group.
Errors with the programs were one problem, but errors caused by the computer were another. The computer frequently misread the binary digits it was given. The engineers thought the mathematicians could compensate for this by programming arithmetic checks, and the mathematicians would too readily assume that a wrong program result was due to a computer error when it was due to a program error. This caused inevitable friction between the mathematicians and the engineers. At the centre of this was a program that Woods had written for inverting a matrix to solve 40 simultaneous equations, which was a large number for the time. The long rows of data required by this calculation took the computer too long to process without an error. For one dispute Woods went to Tom Kilburn, who was second only to Professor Sir Frederic Calland Williams in the engineering department. Kilburn was polite but did not argue, and she felt he was ignoring her complaint. However, 50 years later when she asked him about the exchange, he said that he had not argued "because [he] knew [she was] right."
While at Ferranti, Woods discovered that the women in her department were getting less pay than the men. She presented the case to the personnel department and was able to convince them to grant equal pay and rights for women.
Cottage industry programming
Woods left Ferranti in 1955, when her first child was born. She continued to get involved in smaller programming projects, that she termed "cottage industry programming," so that she could complete jobs from home. Most notably she did some work with the London Transport Executive, to develop a simulation for bus routes that could prevent hold ups and bus bunching. She also developed a program for the RAF at Boscombe Down to track weather balloons and translate their readings. Then she came out of retirement in 1963 to work for a London-based company called K and H. While at K and H she wrote programming manuals until she retired in 1987.
Personal life
In 1954 she married Conway Berners-Lee whom she met while working in the Ferranti team, and together they had four children; Timothy (Tim), Peter, Helen and Michael (Mike). Their eldest son, Sir Tim Berners-Lee is the inventor of the World Wide Web, and their youngest son Mike is an academic.
After a period devoted to bringing up children, she became a schoolteacher of mathematics, and then a programmer using BASIC, Fortran and other languages before retiring in 1987.
She died on 29 November 2017, aged 93.
References
1924 births
2017 deaths
20th-century British mathematicians
21st-century British mathematicians
Alumni of the University of Birmingham
British computer scientists
Ferranti
People from Birmingham, West Midlands
People associated with the Department of Computer Science, University of Manchester
British women computer scientists
British women mathematicians
20th-century women mathematicians
21st-century women mathematicians |
11005092 | https://en.wikipedia.org/wiki/Don%20Maestri | Don Maestri | Donald D. Maestri Jr. (born October 25, 1946) is an American college basketball coach who was the head men's basketball coach at Troy University from 1982 to 2013. Prior to accepting this position, Maestri was an assistant coach at Mississippi State University from 1979 to 1980 and at the University of Alabama from 1980 to 1982. Maestri coached the Trojans to a record of 500–404, one NCAA Basketball Tournament, five regular season conference titles, and one conference tournament title over the course of 26 seasons at Troy. He has been named coach of the year in the East Coast Conference (1994), the Summit League (1997, then known as the Mid-Continent Conference), Atlantic Sun Conference (2000 and 2004) and the Sun Belt Conference (2009)
Maestri is famous for his run and gun style of basketball, which has led the Trojans to lead Division I NCAA basketball in three-pointers per game three consecutive seasons, from 2003 to 2006. He also coached Troy to a 258–141 win over the DeVry Institute of Atlanta on January 12, 1992, which is the highest scoring basketball game in NCAA history. Ironically, his emphasis on defensive pressure and a lock-down style of basketball pushed the Trojans to two Division II Final Fours in six years.
Five different conferences have called Maestri its Coach of the Year, tying him with West Virginia's Bob Huggins for the most among active coaches.
He earned his 500th career win against Florida Atlantic in the Sun Belt Conference Tournament in 2013. Maestri retired from the program on March 9, 2013.
Early life and education
Maestri grew up in New Orleans and graduated from De La Salle High School in 1964. After high school, Maestri attended the University of Southern Mississippi in Hattiesburg, Mississippi, where he graduated in 1968 with a double major B.S. in mathematics and physical education. After graduating from Southern Mississippi, Maestri stayed in Hattiesburg to work as a math and physical education teacher at Beeson Academy in the 1968–69 school year.
Coaching career
High school and college assistant (1970–1982)
Maestri began his coaching career in the fall of 1970 at Holy Cross High School in his hometown of New Orleans. In 10 seasons as coach, he posted a 211–99 record, leading Holy Cross to a state runner-up finish in the top classification in the Louisiana high school ranks in 1974. This 1974 team had a final record of 35–6.
His 1976 team finished with a 32–3 record and was ranked 11th in the nation.
Maestri's teams also finished 1st or 2nd place six times in the Catholic League.
Maestri spent the 1979–80 season as an assistant coach for Jim Hatfield at Mississippi State. During his one year with the program, the Bulldogs finished with a 13–14 record and tied for 6th in the Southeastern Conference.
Maestri spent two seasons at the University of Alabama on Winfrey Sanderson's staff (1980–82). During his time with the Crimson Tide, Alabama participated in the NIT and NCAA tournaments.
Alabama finished 24–7 in 1982, winning the Southeastern Conference Tournament and advancing to the NCAA East Regional.
Troy (1982–2013)
Maestri was named the head coach of the Troy University basketball team in 1982. When he took over the reins of the Troy basketball program, the Trojans had not posted a winning season in the previous five seasons.
Maestri quickly turned the program into a perennial Division II powerhouse. He led the Trojans to a Gulf South Conference title in the 1990–91 season, when they received an invitation to the NCAA Tournament, defeating Florida Southern 78–73 in the first round before losing 86–93 to North Alabama in the second round. He was named Gulf South Coach of the Year for his efforts.
The head coach led Troy State to five NCAA Tournament appearances in 1988, 1990, 1991, 1992 and 1993. Maestri finished with an overall record of 237–131 in Division II.
During the 1987–88 season, the Trojans finished with a 24–10 record, winning the Gulf South Conference title. Troy State made it to the Elite Eight of the NCAA Tournament before falling 77–72 to Alaska–Anchorage.
Five seasons later, in 1992–93, Maestri helped the Trojans to a 27–5 record and led them to the NCAA Tournament Finals, only to fall to Cal State Bakersfield 85–72. Maestri was named Southeast Region Coach of the Year by the NCAA following his team's brilliant season.
During Troy's first season at the Division I level, 1993–94, Maestri's run-and-gun style shot the Trojans to a conference title, going undefeated in the East Coast Conference and winning the conference title. Troy also gained national recognition by leading the nation in three-pointers made per game while averaging 97.6 points per contest. For his efforts, Maestri was named East Coast Conference Coach of the Year.
In 1995, Troy State left the East Coast Conference to join the Mid-Continent Conference (now the Summit League).
During the 1996–97 season, the Trojans were back in contention in the Mid-Continent Conference, upsetting Sweet 16 participant Valparaiso, 72–69, on the road in overtime to cap a 17–10 record. Troy claimed third place in the conference that year, with Maestri earning Coach of the Year honors again.
The Trojans left the Mid-Continent Conference to join the Atlantic Sun Conference in 1998. Maestri's teams struggled mightily their first two seasons in the Atlantic Sun in 1998 and 1999, but quickly turned their fortunes around the next season. In 2000, Maestri brought the Trojans their first conference title in six years and their first Division I conference title. Though the Trojans won the regular season title that year, they did not win the conference tournament, thus keeping them from getting in the NCAA Tournament.
Two seasons later, in the 2001–02 season, Maestri was once again able to lead his team to another Atlantic Sun regular season title. Once again though, his Trojans failed to win the conference tournament and were once again left out of the NCAA Tournament.
In the 2002–03 season, Maestri recorded his best season ever in Division I. He coached his team to a 26–6 record and they won the Atlantic Sun regular season and conference tournament titles. During the season, the team defeated a Southeastern Conference opponent for the first time ever, beating Arkansas 74–66. The Trojans received their first ever invitation to the NCAA Tournament as a Division I program. They fell to Xavier 59–71 in the first round.
The very next season, 2003–04, Maestri once again coached Troy to an Atlantic Sun regular season title for the third consecutive time and finished the season with an 18–2 conference record and a 24–7 record overall. To this day the 18 league victories is the highest single-season total for any Atlantic Sun school. The veteran coach earned A-Sun Coach of the Year honors. Maestri's Trojans received an invitation to the NIT, where they would be defeated in the first round by Niagara, 83–87. The Trojans finished the year second in Division I in scoring at 84.6 points per game, and ninth in scoring margin, winning by an average of 12.0 points per game. Troy also led all of college basketball in three-point field goals made with 346.
In 2005, Troy joined the Sun Belt Conference. Maestri's teams did not finish with a winning record from the 2004–05 season to the 2007–08 season.
Maestri was finally able to coach his team to a winning record once again during the 2008–09 season. Troy finished with a 19–13 record and received an invitation to the CBI tournament, where they were defeated by the College of Charleston in the first round, 91–93. Maestri was named Sun Belt Coach of the Year and was also named Coach of the Year by CollegeInsider.com following the successful campaign.
During Maestri's 2009–10 campaign, the Trojans finished with a 20–13 record and recorded the program's first-ever win over in-state opponent Auburn, upsetting the Tigers in Beard–Eaves Coliseum by a score of 81–77. His team went on to win the Sun Belt regular season title and compete in the NIT.
He retired from Troy after the 2013 season, finishing with a 501–403 overall record as head coach at Troy.
Later career (2016–2018)
Three years after Maestri's retirement from Troy, he joined the Texas A&M Aggies men's basketball staff in 2016 as a special assistant to head coach Billy Kennedy, whom Maestri had mentored since Kennedy was in the eighth grade at Holy Cross High School in Louisiana.
Head coaching record
References
1946 births
Living people
Alabama Crimson Tide men's basketball coaches
American men's basketball coaches
Basketball coaches from Louisiana
College men's basketball head coaches in the United States
High school basketball coaches in Louisiana
Mississippi State Bulldogs men's basketball coaches
Schoolteachers from Louisiana
Sportspeople from New Orleans
Texas A&M Aggies men's basketball coaches
Troy Trojans men's basketball coaches
University of Southern Mississippi alumni |
3846786 | https://en.wikipedia.org/wiki/Section%20summary%20of%20the%20Patriot%20Act%2C%20Title%20II | Section summary of the Patriot Act, Title II | The following is a section summary of the USA PATRIOT Act, Title II. The USA PATRIOT Act was passed by the United States Congress in 2001 as a response to the September 11, 2001 attacks. Title II: Enhanced Surveillance Procedures gave increased powers of surveillance to various government agencies and bodies. This title has 25 sections, with one of the sections (section 224) containing a sunset clause which sets an expiration date, 31 December 2005, for most of the title's provisions. On 22 December 2005, the sunset clause expiration date was extended to 3 February 2006.
Title II contains many of the most contentious provisions of the act. Supporters of the Patriot Act claim that these provisions are necessary in fighting the War on Terrorism, while its detractors argue that many of the sections of Title II infringe upon individual and civil rights.
The sections of Title II amend the Foreign Intelligence Surveillance Act of 1978 and related provisions in 18 U.S.C. ("Crimes and Criminal Procedure"). It also amends the Electronic Communications Privacy Act of 1986. In general, the Title expands federal agencies' powers to intercept, share, and use private telecommunications, especially electronic communications, and focuses on criminal investigations by updating the rules that govern computer crime investigations. It also sets out procedures and limitations under which individuals who feel their rights have been violated can seek redress, including against the United States government. However, the Title also includes a section that deals with trade sanctions against countries whose governments support terrorism, which is not directly related to surveillance issues.
Sections 201 & 202: Intercepting communications
Two sections dealt with the interception of communications by the United States government.
Section 201 is titled Authority to intercept wire, oral, and electronic communications relating to terrorism. This section amended (Authorization for interception of wire, oral, or electronic communications) of the United States Code. Under certain specific conditions, it allows the United States Attorney General (or certain of his subordinates) to authorize an application to a Federal judge for an order authorizing or approving the interception of wire or oral communications by the Federal Bureau of Investigation (FBI) or another relevant U.S. Federal agency.
The Attorney General's subordinates who can use Section 201 are: the Deputy Attorney General, the Associate Attorney General, any Assistant Attorney General, any acting Assistant Attorney General, any Deputy Assistant Attorney General or acting Deputy Assistant Attorney General in the Criminal Division who is specially designated by the Attorney General.
The amendment added a further condition under which an interception order may be granted. An order may now be made where a criminal violation relates to terrorism (defined by ):
the use of weapons of mass destruction (defined by ), or
providing financial aid to facilitate acts of terrorism (defined by ), or
providing material support to terrorists (defined by ), or
providing material support or resources to designated foreign terrorist groups (defined by ).
Note: the legislation states that title 18, section 2516(1), paragraph (p) of the United States Code was redesignated (moved) to become paragraph (q). This paragraph had been previously redesignated by two other pieces of legislation: the Antiterrorism and Effective Death Penalty Act of 1996 and by the Illegal Immigration Reform and Immigrant Responsibility Act of 1996 (see section 201(3)).
Section 202 is titled Authority to intercept wire, oral, and electronic communications relating to computer fraud and abuse offenses, and amended the United States Code to include computer fraud and abuse in the list of reasons why an interception order may be granted.
Section 203: Authority to share criminal investigative information
Section 203 (Authority to share criminal investigation information) modified the Federal Rules of Criminal Procedure with respect to disclosure of information before the grand jury (Rule 6(e)). Section 203(a) allowed the disclosure of matters in deliberation by the grand jury, which are normally otherwise prohibited, if:
a court orders it (before or during a judicial proceeding),
a court finds that there are grounds for a motion to dismiss an indictment because of matters before the Grand Jury,
if the matters in deliberation are made by an attorney for the government to another Federal grand jury,
an attorney for the government requests disclosure on the grounds that the matters before the grand jury may reveal a violation of State criminal law,
the matters involve foreign intelligence or counterintelligence or foreign intelligence information. Foreign intelligence and counterintelligence were defined in section 3 of the National Security Act of 1947, and "foreign intelligence information" was further defined in the amendment as information about:
an actual or potential attack or other grave hostile acts of a foreign power or an agent of a foreign power;
sabotage or international terrorism by a foreign power or an agent of a foreign power; or
clandestine intelligence activities by an intelligence service or network of a foreign power or by an agent of foreign power; or
information about a foreign power or foreign territory that relates to the national defense or the security of the United States or the conduct of the foreign affairs of the United States.
information about non-U.S. and U.S. citizens
203(a) gave the court the power to order a time within which information may be disclosed, and specified when a government agency may use information disclosed about a foreign power. The rules of criminal procedure now state that "within a reasonable time after such disclosure, an attorney for the government shall file under seal a notice with the court stating the fact that such information was disclosed and the departments, agencies, or entities to which the disclosure was made."
Section 203(b) modified , which details who is allowed to learn the results of a communications interception, to allow any investigative or law enforcement officer, or attorney for the Government, to divulge foreign intelligence, counterintelligence or foreign intelligence information to a variety of Federal officials. Specifically, any official who has obtained knowledge of the contents of any wire, oral, or electronic communication, or evidence derived from this, could divulge this information to any Federal law enforcement, intelligence, protective, immigration, national defense, or national security official. The definition of "foreign intelligence" was the same as in section 203(a), and likewise covers information about both non-U.S. and U.S. citizens. The information received must only be used as necessary in the conduct of the official's official duties.
The definition of "foreign intelligence information" is defined again in Section 203(d).
Section 203(c) specified that the Attorney General must establish procedures for the disclosure of information due to (see above), for those people who are defined as U.S. citizens.
Section 204: Limitations on communication interceptions
Section 204 (Clarification of intelligence exceptions from limitations on interception and disclosure of wire, oral, and electronic communication) removed restrictions from the acquisition of foreign intelligence information from international or foreign communications. It also clarified that the Foreign Intelligence Surveillance Act of 1978 should be the sole means of electronic surveillance not just for oral and wire intercepts, but for electronic communications as well.
Section 205: Employment of translators by the FBI
Under section 205 (Employment of translators by the Federal Bureau of Investigation), the Director of the Federal Bureau of Investigation is now allowed to employ translators to support counterterrorism investigations and operations without regard to applicable Federal personnel requirements and limitations. However, he must report to the House Judiciary Committee and Senate Judiciary Committee the number of translators employed and any legal reasons why he cannot employ translators from federal, state, or local agencies.
Section 206: Roving surveillance authority
The Foreign Intelligence Surveillance Act of 1978 allows an applicant access to all information, facilities, or technical assistance necessary to perform electronic surveillance on a particular target. The assistance given must protect the secrecy of and cause as little disruption to the ongoing surveillance effort as possible. The direction could be made at the request of the applicant of the surveillance order, by a common carrier, landlord, custodian or other specified person. Section 206 (Roving surveillance authority under the Foreign Intelligence Surveillance Act of 1978) amended this to add:
or in circumstances where the Court finds that the actions of the target of the application may have the effect of thwarting the identification of a particular person.
This allows intelligence agencies to undertake "roving" surveillance: they do not have to specify the exact facility or location where their surveillance will be done. Roving surveillance was already specified for criminal investigations under , and section 206 brought the ability of intelligence agencies to undertake such roving surveillance into line with such criminal investigations. However, the section was not without controversy, as James X. Dempsey, the Executive Director of the Center for Democracy & Technology, argued that a few months after the Patriot Act was passed the Intelligence Authorization Act was also passed that had the unintended effect of seeming to authorize "John Doe" roving taps — FISA orders that identify neither the target nor the location of the interception (see The Patriot Debates, James X. Dempsey debates Paul Rosenzweig on section 206).
Section 207: Duration of FISA surveillance on agents of a foreign power
Previously FISA only defined the duration of a surveillance order against a foreign power (defined in ). This was amended by section 207 (Duration of FISA surveillance of non-United States persons who are agents of a foreign power) to allow surveillance of agents of a foreign power (as defined in section ) for a maximum of 90 days. Section 304(d)(1) was also amended to extend orders for physical searches from 45 days to 90 days, and orders for physical searches against agents of a foreign power are allowed for a maximum of 120 days. The act also clarified that extensions for surveillance could be granted for a maximum of a year against agents of a foreign power.
Section 208: Designation of judges
Section 103(A) of FISA was amended by Section 208 (Designation of judges) of the Patriot Act to increase the number of federal district court judges who must now review surveillance orders from seven to 11. Of these, no fewer than three must reside within 20 miles of the District of Columbia.
Section 209: Seizure of voice-mail messages pursuant to warrants
Section 209 (Seizure of voice-mail messages pursuant to warrants) removed the text "any electronic storage of such communication" from title 18, section 2510 of the United States Code. Before this was struck from the Code, the U.S. government needed to apply for a title III wiretap order before they could open voice-mails; now the government need only apply for an ordinary search warrant. Section 2703, which specifies when a "provider of electronic communication services" must disclose the contents of stored communications, was also amended to allow such a provider to be compelled to disclose the contents via a search warrant, and not a wiretap order. According to Vermont senator Patrick Leahy, this was done to harmonize "the rules applicable to stored voice and non-voice (e.g., e-mail) communications".
Sections 210 & 211: Scope of subpoenas for records of electronic communications
The U.S. Code specifies when the U.S. government may require a provider of an electronic communication service to hand over communication records. It specifies what that provider must disclose to the government, and was amended by section 210 (Scope of subpoenas for records of electronic communications) to include records of session times and durations of electronic communication as well as any identifying numbers or addresses of the equipment that was being used, even if this may only be temporary. For instance, this would include temporarily assigned IP addresses, such as those established by DHCP.
Section 211 (Clarification of scope) further clarified the scope of such orders. Section 631 of the Communications Act of 1934 deals with the privacy granted to users of cable TV. The code was amended to allow the government to have access to the records of cable customers, with the notable exclusion of records revealing cable subscriber selection of video programming from a cable operator.
Section 212: Emergency disclosure of electronic communications
Section 212 (Emergency disclosure of electronic communications to protect life and limb) amended the US Code to stop a communications provider from providing communication records (not necessarily relating to the content itself) about a customer's communications to others. However, should the provider reasonably believe that an emergency involving immediate danger of death or serious physical injury to any person requires it, the communications provider can now disclose this information. The act did not make clear what "reasonably" meant.
A communications provider could also disclose communications records if:
a court orders the disclosure of communications at the request of a government agency ()
the customer allows the information to be disclosed
if the service provider believes that they must do so to protect their rights or property.
This section was repealed by the Homeland Security Act of 2002 — this act also created the Homeland Security Department — and was replaced with a new and permanent emergency disclosure provision.
Section 213: Delayed search warrant notification
Section 213 (Authority for delaying notice of the execution of a warrant) amended the US Code to allow the notification of search warrants to be delayed.
This section has been commonly referred to as the "sneak and peek" section, a phrase originating from the FBI and not, as commonly believed, from opponents of the Patriot Act. The U.S. government may now legally search and seize property that constitutes evidence of a United States criminal offense without immediately telling the owner. The court may only order the delayed notification if it has reason to believe the notification would hurt an investigation (delayed notifications were already defined in ) or, if a search warrant specified that the subject of the warrant must be notified "within a reasonable period of its execution," it allows the court to extend the period before the notification is given, though the government must show "good cause". If the search warrant prohibited the seizure of property or communications, then notification of the search warrant could be delayed.
Before the Patriot Act was enacted, there were three cases before the United States district courts: United States v. Freitas, 800 F.2d 1451 (9th Cir. 1986); United States v. Villegas, 899 F.2d 1324 (2d Cir. 1990); and United States v. Simons, 206 F.3d 392 (4th Cir. 2000). Each determined that, under certain circumstances, it was not unconstitutional to delay the notification of search warrants.
Section 214: Pen register and trap and trace authority
FISA was amended by section 214 (Pen register and trap and trace authority under FISA) to clarify that pen register and trap and trace surveillance can be authorised to allow government agencies to gather foreign intelligence information. Where the law only allowed them to gather surveillance if there was evidence of international terrorism, it now gives the courts the power to grant trap and traces against:
non-U.S. citizens,
those suspected of being involved with international terrorism, or
those undertaking clandestine intelligence activities.
Any investigation against U.S. citizens must not violate the First Amendment to the United States Constitution.
Section 215: Access to records and other items under FISA
This section is commonly referred to as the "library records" provision because of the wide range of personal material that can be investigated.
FISA was modified by section 215 (Access to records and other items under the Foreign Intelligence Surveillance Act) of the Act to allow the Director of the FBI (or an official designated by the Director, so long as that official's rank is no lower than Assistant Special Agent in Charge) to apply for an order to produce materials that assist in an investigation undertaken to protect against international terrorism or clandestine intelligence activities. The Act gives an example to clarify what it means by "tangible things": it includes "books, records, papers, documents, and other items".
It is specified that any such investigation must be conducted in accordance with guidelines laid out in Executive Order 12333 (which pertains to United States intelligence activities). Investigations must also not be performed on U.S. citizens who are carrying out activities protected by the First Amendment to the Constitution of the United States.
Any order that is granted must be given by a FISA court judge or by a magistrate judge who is publicly designated by the Chief Justice of the United States to allow such an order to be given. Any application must prove that it is being conducted without violating the First Amendment rights of any U.S. citizens. The application can only be used to obtain foreign intelligence information not concerning a U.S. citizen or to protect against international terrorism or clandestine intelligence activities.
This section of the USA PATRIOT Act is controversial because the order may be granted ex parte and, once it is granted, the order may not disclose the reasons why it was granted, in order to avoid jeopardizing the investigation.
The section carries a gag order stating that "No person shall disclose to any other person (other than those persons necessary to produce the tangible things under this section) that the Federal Bureau of Investigation has sought or obtained tangible things under this section". Senator Rand Paul stated that the non-disclosure is imposed for one year, though this is not explicitly mentioned in the section.
In order to protect anyone who complies with the order, FISA now prevents any person who complies with the order in "good faith" from being liable for producing any tangible goods required by the court order. The production of tangible items is not deemed to constitute a waiver of any privilege in any other proceeding or context.
As a safeguard, section 502 of FISA compels the Attorney General to inform the Permanent Select Committee on Intelligence of the House of Representatives and the Select Committee on Intelligence of the Senate of all such orders granted. On a semi-annual basis, the Attorney General must also provide a report to the Committee on the Judiciary of the House of Representatives and the Senate which details the total number of applications made over the previous six months for orders approving requests for the production of tangible things and the total number of such orders either granted, modified, or denied.
This section was reauthorized in 2011.
During a House Judiciary hearing on domestic spying on July 17, 2013, John C. Inglis, the deputy director of the National Security Agency (NSA), told a member of the House Judiciary Committee that NSA analysts can perform "a second or third hop query" through its collections of telephone data and internet records in order to find connections to terrorist organizations. "Hops" is a technical term indicating connections between people. A three-hop query means that the NSA can look at data not only from a suspected terrorist, but from everyone that suspect communicated with, then from everyone those people communicated with, and then from everyone all of those people communicated with. NSA officials had said previously that data mining was limited to two hops, but Inglis suggested that the Foreign Intelligence Surveillance Court has allowed for data analysis extending "two or three hops".
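The scale of such "hop" queries can be illustrated with a short, hypothetical sketch in Python: starting from a single identifier, a breadth-first expansion over a graph of call records collects every identifier reachable within a given number of hops. The call graph, identifiers and function name below are invented for illustration and do not correspond to any actual agency system or dataset.

from collections import deque

def hop_query(call_graph, seed, max_hops):
    """Return identifiers reachable from `seed` within `max_hops` hops.

    `call_graph` maps each identifier to the set of identifiers it has
    contacted. This is a generic breadth-first traversal, shown only to
    illustrate the "hop" concept described above.
    """
    seen = {seed: 0}
    queue = deque([seed])
    while queue:
        current = queue.popleft()
        if seen[current] == max_hops:
            continue  # do not expand beyond the permitted number of hops
        for contact in call_graph.get(current, set()):
            if contact not in seen:
                seen[contact] = seen[current] + 1
                queue.append(contact)
    return seen  # identifier -> hop distance from the seed

# Toy example: A is the seed, B and C are one hop away, D is two hops away.
graph = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A"}, "D": {"B", "E"}}
print(hop_query(graph, "A", 2))  # E is not reached because it lies beyond two hops

Because each hop multiplies the number of contacts examined, even a two- or three-hop query over real telephone metadata can reach a very large number of people.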
However, in 2015, the Second Circuit appeals court ruled in ACLU v. Clapper that Section 215 of the Patriot Act did not authorize the bulk collection of phone metadata, which judge Gerard E. Lynch called a "staggering" amount of information.
On May 20, 2015, Paul spoke for ten and a half hours in opposition to the reauthorization of Section 215 of the Patriot Act.
At midnight on May 31, 2015, Section 215 expired. With the passage of the USA Freedom Act on June 2, 2015, the expired parts of the law, including Section 215, were broadly reported as restored and renewed through 2019. However, the USA Freedom Act did not explicitly state that it was restoring the expired provisions of Section 215. Since such renewal language is nowhere to be found, the law amended the version of the Foreign Intelligence Surveillance Act that existed on October 25, 2001, prior to the changes brought by the USA Patriot Act, rendering much of the amendment language incoherent. How this legislative issue will be fixed is not clear. The attempted amendments to Section 215 were intended to stop the NSA from continuing its mass phone data collection program. Instead, phone companies will retain the data and the NSA can obtain information about targeted individuals with permission from a federal court.
Section 216: Authority to issue pen registers and trap and trace devices
Section 216 (Modification of authorities relating to use of pen registers and trap and trace devices) deals with three specific areas with regards to pen registers and trap and trace devices: general limitations to the use of such devices, how an order allowing the use of such devices must be made, and the definition of such devices.
Limitations
details the exceptions related to the general prohibition on pen register and trap and trace devices. Along with gathering information for dialup communications, it allows for gathering routing and other addressing information. It is specifically limited to this information: the Act does not allow such surveillance to capture the actual information that is contained in the communication being monitored. However, organisations such as the EFF have pointed out that certain types of information that can be captured, such as URLs, can have content embedded in them. They object to the application of trap and trace and pen register devices to newer technology using a standard designed for telephones.
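The EFF's point can be illustrated with a small, hypothetical example in Python: the query string of a URL, nominally "addressing" information, can directly reveal what a user searched for. The URL below is invented for illustration.

from urllib.parse import urlparse, parse_qs

# Hypothetical URL of the kind a pen register applied to web traffic might record.
url = "https://www.example.com/search?q=how+to+treat+depression"

parts = urlparse(url)
print(parts.netloc)           # 'www.example.com'  (addressing information)
print(parts.path)             # '/search'          (already hints at the user's activity)
print(parse_qs(parts.query))  # {'q': ['how to treat depression']}  (effectively content)

This is why critics argue that applying a standard designed for dialed telephone numbers to URLs can capture substantive content rather than mere routing data.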
Making and carrying out orders
It also details that an order may be applied for ex parte (without the party it is made against present, which in itself is not unusual for search warrants), and allows the agency which applied for the order to compel any relevant person or entity providing wire or electronic communication service to assist with the surveillance. If the party against whom the order is made so requests, the attorney for the Government, law enforcement or investigative officer that is serving the order must provide written or electronic certification that the order applies to the targeted individual.
If a pen register or trap and trace device is used on a packet-switched data network, then the agency doing surveillance must keep a detailed log containing:
any officer or officers who installed the device and any officer or officers who accessed the device to obtain information from the network;
the date and time the device was installed, the date and time the device was uninstalled, and the date, time, and duration of each time the device is accessed to obtain information;
the configuration of the device at the time of its installation and any subsequent modification made to the device; and
any information which has been collected by the device
This information must be generated for the entire time the device is active, and must be provided ex parte and under seal to the court which entered the ex parte order authorizing the installation and use of the device. This must be done within 30 days after termination of the order.
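As an illustration only, the record-keeping requirement above could be modelled with a structure such as the following; the class and field names are paraphrases of the statutory list, not part of any real system or statute.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class PenRegisterAuditRecord:
    """Illustrative audit record mirroring the log items listed above."""
    installing_officers: List[str]                 # officers who installed the device
    accessing_officers: List[str]                  # officers who accessed it for information
    installed_at: datetime
    uninstalled_at: datetime
    access_times: List[datetime]                   # date and time of each access
    access_durations_minutes: List[float]          # duration of each access
    initial_configuration: str
    configuration_changes: List[str] = field(default_factory=list)
    collected_information: List[str] = field(default_factory=list)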
Orders must now include the following information:
the identifying number of the device under surveillance
the location of the telephone line or other facility to which the pen register or trap and trace device is to be attached or applied
if a trap and trace device is installed, the geographic limits of the order must be specified
This section amended the non-disclosure requirements by expanding them to cover those whose facilities are used to establish the trap and trace or pen register and those people who assist with applying the surveillance order, who must not disclose that surveillance is being undertaken. Before this it had only applied to the person owning or leasing the line.
Definitions
The following terms were redefined in the US Code's chapter 206 (which solely deals with pen registers and trap and trace devices):
Court of competent jurisdiction: defined in , subparagraph A was stricken and replaced to redefine the court to be any United States district court (including a magistrate judge of such a court) or any United States court of appeals having jurisdiction over the offense being investigated (title 18 also allows State courts that have been given authority by their State to use pen register and trap and trace devices)
Pen register: defined in , the definition of such a device was expanded to include a device that captures dialing, routing, addressing, or signaling information from an electronic communication device. It limited the usage of such devices to exclude the capturing of any of the contents of communications being monitored. was also similarly amended.
Trap and trace device: defined in , the definition was similarly expanded to include the dialing, routing, addressing, or signaling information from an electronic communication device. However, a trap and trace device can now also be a "process", not just a device.
Contents: clarifies the term "contents" (as referred to in the definition of trap and trace devices and pen registers) to conform to the definition as defined in , which when used with respect to any wire, oral, or electronic communication, includes any information concerning the substance, purport, or meaning of that communication.
Section 217: Interception of computer trespasser communications
Section 217 (Interception of computer trespasser communications) firstly defines the following terms:
Protected computer: this is defined in , and is any computer that is used by a financial institution or the United States Government or one which is used in interstate or foreign commerce or communication, including a computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States.
Computer trespasser: this is defined in and references to this phrase means
a person who accesses a protected computer without authorization and thus has no reasonable expectation of privacy in any communication transmitted to, through, or from the protected computer; and
does not include a person known by the owner or operator of the protected computer to have an existing contractual relationship with the owner or operator of the protected computer for access to all or part of the protected computer
Amendments were made to make it lawful for a person to intercept the communications of a computer trespasser if:
the owner or operator of the protected computer authorizes the interception of the computer trespasser's communications on the protected computer,
the person is lawfully engaged in an investigation,
the person has reasonable grounds to believe that the contents of the computer trespasser's communications will be relevant to their investigation, and
any communication captured can only relate to those transmitted to or from the computer trespasser.
Section 218: Foreign intelligence information
Section 218 (Foreign intelligence information) amended and (both FISA sections 104(a)(7)(B) and 303(a)(7)(B), respectively) to change the requirement that gaining access to foreign intelligence be "the purpose" of surveillance orders under FISA to its being a "significant purpose". Mary DeRosa, in The Patriot Debates, explained that the reason behind this was to remove a legal "wall" which arose when criminal and foreign intelligence investigations overlapped. This was because the U.S. Department of Justice interpreted "the purpose" of surveillance as being restricted to collecting information for foreign intelligence, which DeRosa says "was designed to ensure that prosecutors and criminal investigators did not use FISA to circumvent the more rigorous warrant requirements for criminal cases". However, she also says that it is debatable whether this legal tightening of the definition was even necessary, stating that "the Department of Justice argued to the FISA Court of Review in 2002 that the original FISA standard did not require the restrictions that the Department of Justice imposed over the years, and the court appears to have agreed [which] leaves the precise legal effect of a sunset of section 218 somewhat murky."
Section 219: Single-jurisdiction search warrants for terrorism
Section 219 (Single-jurisdiction search warrants for terrorism) amended the Federal Rules of Criminal Procedure to allow a magistrate judge who is involved in an investigation of domestic terrorism or international terrorism the ability to issue a warrant for a person or property within or outside of their district.
Section 220: Nationwide service of search warrants for electronic evidence
Section 220 (Nationwide service of search warrants for electronic evidence) gives the power to Federal courts to issue nationwide service of search warrants for electronic surveillance. However, only courts with jurisdiction over the offense can order such a warrant. This required amending and .
Section 221: Trade sanctions
Section 221 (Trade sanctions) amended the Trade Sanctions Reform and Export Enhancement Act of 2000. This Act prohibits, except under certain specific circumstances, the President from imposing a unilateral agricultural sanction or unilateral medical sanction against a foreign country or foreign entity. The Act holds various exceptions to this prohibition, and the Patriot Act further amended the exceptions to include holding sanctions against countries that design, develop or produce chemical or biological weapons, missiles, or weapons of mass destruction. It also amended the act to include the Taliban as a state sponsor of international terrorism. In amending Title IX, section 906 of the Trade Sanctions Act, the Secretary of State determined the Taliban to have repeatedly provided support for acts of international terrorism, and the export of agricultural commodities, medicine, or medical devices is now pursuant to one-year licenses issued and reviewed by the United States Government. However, the export of agricultural commodities, medicine, or medical devices to the Government of Syria or to the Government of North Korea was exempt from such a restriction.
The Patriot Act further states that nothing in the Trade Sanctions Act will limit the application of criminal or civil penalties to those who export agricultural commodities, medicine, or medical devices to:
foreign entities who commit acts of violence to disrupt the Middle East peace process
those deemed to be part of a Foreign Terrorist Organization under the Antiterrorism and Effective Death Penalty Act of 1996
foreign entities or individuals deemed to support terrorist activities
any entity that is involved in drug trafficking
any foreign entity or individual who is subject to any restriction for involvement in weapons of mass destruction or missile proliferation.
Section 222: Assistance to law enforcement agencies
Section 222 (Assistance to law enforcement agencies) states that nothing in the Patriot Act shall make a communications provider or other individual provide more technical assistance to a law enforcement agency than what is set out in the Act. It also allows for the reasonable compensation of any expenses incurred while assisting with the establishment of pen registers or trap and trace devices.
Section 223: Civil liability for certain unauthorized disclosures
allows any person who has had their rights violated due to the illegal interception of communications to take civil action against the offending party. Section 223 (Civil liability for certain unauthorized disclosures) excluded the United States from such civil action.
If a court or appropriate department or agency determines that the United States or any of its departments or agencies has violated any provision of chapter 119 of the U.S. Code, it may request an internal review from that agency or department. If necessary, an employee may then have administrative action taken against them. If the department or agency does not take action, it must notify the Inspector General who has jurisdiction over the agency or department and give the reasons why it did not take action.
A citizen's rights will also be found to have been violated if an investigative, law enforcement officer or governmental entity discloses information beyond that allowed in .
U.S. Code Title 18, Section 2712 added
An entirely new section was appended to Title 18, Chapter 121 of the US Code: Section 2712, "Civil actions against the United States". It allows people to take action against the US Government if they feel that their rights were violated, as defined in chapter 121, chapter 119, or sections 106(a), 305(a), or 405(a) of FISA. The court may assess damages of no less than US$10,000 and reasonably incurred litigation costs. Those seeking damages must present their claims to the relevant department or agency as specified in the procedures of the Federal Tort Claims Act.
Actions taken against the United States must be initiated within two years of when the claimant has had a reasonable chance to discover the violation. All cases are presented before a judge, not a jury. However, the court will order a stay of proceedings if they determine that if during the court case civil discovery will hurt the ability of the government to conduct a related investigation or the prosecution of a related criminal case. If the court orders the stay of proceedings they will extend the time period that a claimant has to take action on a reported violation. However, the government may respond to any action against it by submitting evidence ex parte in order to avoid disclosing any matter that may adversely affect a related investigation or a related criminal case. The plaintiff is then given an opportunity to make a submission to the court, not ex parte, and the court may request further information from either party.
If a person wishes to discover or obtain applications or orders or other materials relating to electronic surveillance, or to discover, obtain, or suppress evidence or information obtained or derived from electronic surveillance under FISA, then the Attorney General may file an affidavit under oath that disclosure or an adversary hearing would harm the national security of the United States. In these cases, the court may review in camera and ex parte the material relating to the surveillance to make sure that such surveillance was lawfully authorized and conducted. The court may then disclose part of the material relating to the surveillance. However, the court is restricted in that it may only do this "where such disclosure is necessary to make an accurate determination of the legality of the surveillance". If it is then determined that the use of a pen register or trap and trace device was not lawfully authorized or conducted, the result of such surveillance may be suppressed as evidence. However, should the court determine that such surveillance was lawfully authorized and conducted, it may deny the motion of the aggrieved person.
It is further stated that if a court or appropriate department or agency determines that an officer or employee of the United States willfully or intentionally violated any provision of chapter 121 of the U.S. Code, they will request an internal review from that agency or department. If necessary, an employee may then have administrative action taken against them. If the department or agency does not take action, it must notify the Inspector General who has jurisdiction over the agency or department and give the reasons why it did not take action. (See for a similar part of the Act.)
Section 224: Sunset
Section 224 (Sunset) is a sunset clause. Title II and the amendments made by the title originally would have ceased to have effect on December 31, 2005, with the exception of certain specified sections. However, on December 22, 2005, the sunset clause expiration date was extended to February 3, 2006, and then on February 2, 2006 it was further extended to March 10, 2006.
Further, any particular foreign intelligence investigations that are ongoing will continue to be run under the expired sections.
Section 225: Immunity for compliance with FISA wiretap
Section 225 (Immunity for compliance with FISA wiretap) gives legal immunity to any provider of a wire or electronic communication service, landlord, custodian, or other person that provides any information, facilities, or technical assistance in accordance with a court order or request for emergency assistance. This was added to FISA as section 105 ().
Notes and references
External links
Text of the USA PATRIOT Act
EPIC – USA PATRIOT Act pages
CDT – USA PATRIOT Act Overview
ACLU – Reform the USA PATRIOT Act
US DOJ's USA PATRIOT Act site
Title II
Privacy of telecommunications
Privacy law in the United States |
15500572 | https://en.wikipedia.org/wiki/1882%20Troy%20Trojans%20season | 1882 Troy Trojans season | The 1882 season was to be the last for the Troy Trojans. The team finished at 35–48, in seventh place in the National League, and were disbanded after the season.
Regular season
Season standings
Record vs. opponents
Roster
Player stats
Batting
Starters by position
Note: Pos = Position; G = Games played; AB = At bats; H = Hits; Avg. = Batting average; HR = Home runs; RBI = Runs batted in
Other batters
Note: G = Games played; AB = At bats; H = Hits; Avg. = Batting average; HR = Home runs; RBI = Runs batted in
Pitching
Starting pitchers
Note: G = Games pitched; IP = Innings pitched; W = Wins; L = Losses; ERA = Earned run average; SO = Strikeouts
Relief pitchers
Note: G = Games pitched; W = Wins; L = Losses; SV = Saves; ERA = Earned run average; SO = Strikeouts
References
1882 Troy Trojans season at Baseball Reference
Troy Trojans (MLB team) seasons
Troy Trojans season
Troy Trojans |
61701120 | https://en.wikipedia.org/wiki/Google%20Chat | Google Chat | Google Chat (formerly known as Hangouts Chat) is a communication software developed by Google built for teams that provides direct messages and team chat rooms, along with a group messaging function that allows Google Drive content sharing. It is one of two apps that constitute the replacement for Google Hangouts, the other being Google Meet. Google planned to begin retiring Google Hangouts in October 2019.
It was initially available only to Google Workspace (formerly G Suite until October 2020) customers, with identical features in all packages except a lack of Vault data retention in the Basic package. However, in October 2020, Google announced plans to open Google Chat up to consumers in 2021, once Hangouts has been officially retired, and Chat began to roll out to consumer accounts in "early access" in February 2021. Hangouts will remain a consumer-level product for people using standard Google accounts. By April 2021, Google Chat became fully available for free as an "Early Access" service, for users who choose to use it instead of Hangouts.
Google plans to deprecate Google Hangouts and replace it with Google Chat in early 2022. In February 2022, Google announced plans to begin the final migration of all Google Workspace customers from Google Hangouts to Google Chat, which is scheduled to be finalized in May 2022.
History
Google Chat was called Hangouts Chat before it was rebranded. Following the rebranding, and along with a similar change for Hangouts Meet, the Hangouts brand is to be removed from Google Workspace.
In August 2021, Google started automatically signing out Google Hangouts users on iOS and Android and notifying them to switch to Google Chat.
See also
Chat, the name used by Google for Rich Communication Services (RCS) in the Google Messages app
Google Allo, a defunct instant messaging service released by Google in 2016
Google Talk, a defunct instant messaging service released by Google in 2005
Google Voice, the Google telephone call routing and voicemail service
Google Meet
Google Duo
References
Further reading
IOS software
Android (operating system) software
Cross-platform software
2017 software
Google instant messaging software |
11150343 | https://en.wikipedia.org/wiki/Solid%20Edge | Solid Edge | Solid Edge is a 3D CAD, parametric feature (history based) and synchronous technology solid modeling software. It runs on Microsoft Windows and provides solid modeling, assembly modelling and 2D orthographic view functionality for mechanical designers. Through third party applications it has links to many other Product Lifecycle Management (PLM) technologies.
Originally developed and released by Intergraph in 1996 using the ACIS geometric modeling kernel, it changed to using the Parasolid kernel when it was purchased and further developed by UGS Corp in 1998. In 2007, UGS was acquired by the Automation & Drives Division of Siemens AG, and the company was renamed Siemens PLM Software on October 1, 2007.
Since September 2006, Siemens has also offered a free 2D version called Solid Edge 2D Drafting. Solid Edge is available in Design and Drafting, Foundation, Classic or Premium packages. The "Premium" package includes all of the features of "Classic" plus mechanical and electrical routing software and engineering simulation capabilities for Computer Aided Engineering (CAE).
Solid Edge is a direct competitor to SolidWorks, Creo, Inventor, IRONCAD, and others.
Release history
Modeling
Ordered
The ordered modeling process begins with a base feature controlled by a 2D sketch, which is either a linear, revolved, lofted, or swept extrusion. Each subsequent feature is built on the previous feature. When editing, the model is "rolled back" to the point where the feature was created so that the user cannot try to apply constraints to geometry that does not yet exist. The drawback is that the user does not see how the edit will interact with the subsequent features. This is typically called "history" or "regeneration based" modeling. In both ordered and synchronous mode, Solid Edge offers hybrid surface/solid modeling, in which its "Rapid Blue" technology helps the user create complex shapes.
Direct
The Direct modeling features allows the user to change model geometry/topology without being hindered by a native model's existing—or an imported model's lack of—parametric and/or history data. This is particularly useful for working with imported models or complex native models. Direct modeling features are available in both Ordered and Synchronous mode. If used in the Ordered mode, the direct modeling edits are appended to the history tree at the point of current rollback just like any other ordered feature.
Synchronous
The software combines direct modeling with dimension driven design (features and synchronously solving parametrics) under the name "Synchronous Technology". Parametric relationships can be applied directly to the solid features without having to depend on 2D sketch geometry, and common parametric relationships are applied automatically.
Unlike other direct modeling systems, it is not driven by the typical history-based modeling system; instead it provides parametric dimension-driven modeling by synchronizing geometry, parameters and rules using a decision-making engine, allowing users to apply unpredicted changes. This object-driven editing model is known as the Object Action Interface, which emphasizes a user interface that provides direct manipulation of objects (DMUI). ST2 added support for sheet metal design, and also recognizes bends, folds and other features of imported sheet metal parts.
Synchronous Technology has been integrated into Solid Edge and another Siemens commercial CAD software, NX, as an application layer built on the D-Cubed and Parasolid software components.
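The difference between history-based and dimension-driven editing can be sketched in a few lines of generic Python; this is an illustration of the concept only and does not use Solid Edge's or NX's actual programming interfaces. In a dimension-driven model, the geometry is recomputed directly from the current parameter values rather than by rolling back and replaying an ordered feature history.

class ParametricRectangle:
    """Generic illustration of dimension-driven geometry (not a real CAD API)."""

    def __init__(self, width, height):
        self.dimensions = {"width": width, "height": height}

    def set_dimension(self, name, value):
        # Editing a driving dimension immediately changes the model;
        # no feature history needs to be rolled back and replayed.
        self.dimensions[name] = value

    def vertices(self):
        w = self.dimensions["width"]
        h = self.dimensions["height"]
        return [(0, 0), (w, 0), (w, h), (0, h)]

plate = ParametricRectangle(width=100, height=40)
plate.set_dimension("width", 120)  # the change is solved against the current parameters
print(plate.vertices())            # [(0, 0), (120, 0), (120, 40), (0, 40)]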
Convergent Modeling
With Solid Edge ST10, Siemens introduced Convergent Modeling which adds the ability to work with polygon mesh data alongside more traditional solid and surface modelling techniques.
Assembly
An assembly is built from individual part documents connected by mating constraints, as well as assembly features and directed parts like frames which only exist in the Assembly context. Solid Edge supports large assemblies with over 1,000,000 parts.
Features
A draft file consists of the 3D model projected to one or more 2D views of a part or assembly file.
Solid Edge integrates with Windows Indexing, SharePoint or Teamcenter to provide product lifecycle management. Solid Edge also integrates with PLM products from third parties. Solid Edge ST9 brought a new data management capability that leverages the Windows file indexing service to add basic data management functionality without the need for an additional server or set-up.
Solid Edge provides support for Finite Element Analysis (FEA) starting with the Solid Edge ST2 version released in 2009. This functionality is based on Siemens PLM's existing Femap and NX Nastran technologies. Solid Edge 2019 added Computational Fluid Dynamics functionality based on Mentor's FloEFD, and Solid Edge 2020 added rigid body motion and transient dynamic analysis.
See also
Freeform surface modelling
Comparison of CAD Software
Comparison of CAD editors for CAE
References
Screw Jack assembly in Solid Edge
External links
Computer-aided design software
Siemens software products
1995 software |
25616444 | https://en.wikipedia.org/wiki/TurboPrint | TurboPrint | TurboPrint is a closed source printer driver system for Linux, AmigaOS and MorphOS. It supports a number of printers that don't yet have a free driver, and fuller printer functionality on some printer models. In recent versions, it integrates with the CUPS printing system.
References
Notes
Carla Schroder (December 16, 2009), TurboPrint for Linux Saves the Day-- Again, linuxplanet.com
A. Lizard (November 6, 2006), Turning SLED10 Linux Into a Practical User Desktop, Dr. Dobb's
Michael Kofler, Jetzt lerne ich Linux im Büro: Office-Aufgaben einfach und sicher unter Linux meistern, Pearson Education, 2004, , p. 95
Andreas Proschofsky, Turboprint 2: Professionelle Linux-Drucker-Treiber in neuer Version, 8 July 2008, Der Standard
Christian Verhille, Mandriva Linux 2007, pp. 278-280, Editions ENI, 2006,
Fulvio Peruggi (2007), TurboPrint 7.60 on MorphOS
Computer printing
Amiga software
Linux software
MorphOS
MorphOS software |
47870281 | https://en.wikipedia.org/wiki/Dr.%20A.%20Q.%20Khan%20Institute%20of%20Computer%20Sciences%20and%20Information%20Technology | Dr. A. Q. Khan Institute of Computer Sciences and Information Technology | Dr. A. Q. Khan Institute of Computer Sciences and Information Technology, commonly known as KICSIT, is a sub-campus of the Institute of Space Technology located in Kahuta, Rawalpindi, Punjab. The institute was inaugurated in November 2000 by Dr. Abdul Qadeer Khan, the founder and then Chairman of KRL.
Background
Initially, KICSIT offered a Bachelor of Science in Computer Science (BSCS) program in affiliation with Gomal University, D. I. Khan. Later, in 2001, a 4-year Bachelor of Engineering in Information Technology (BEIT) programme was started in affiliation with the University of Engineering and Technology (UET), Taxila. In Spring 2013, the BEIT programme was converted into a BSIT (Bachelor of Science in Information Technology) programme, which is approved by the National Computing Education Accreditation Council (NCEAC). The BSIT programme is run in affiliation with UET Taxila.
Facilities
The electronics and physics labs have the equipment needed for required experiments and training. There is a common room for female students.
Degree programs
BS (CS)
The Bachelor of Science in Computer Science (BSCS) degree program is affiliated with the University of Engineering & Technology, Taxila. The medium of instruction at KICSIT is English, except for Islamic Studies.
Each semester comprises sixteen weeks of teaching. Mid-semester examinations are held after the eighth week. The seventeenth week consists of preparatory holidays for the end-semester examinations, which are held in the eighteenth week.
BS (CE)
The Bachelor of Science in Computer Engineering (BSCE) degree program is affiliated with the Institute of Space Technology, Islamabad, and is spread over four academic years. Each semester comprises sixteen weeks of teaching. The program is accredited by the Pakistan Engineering Council.
See also
Khan Research Laboratories
References
External links
Engineering universities and colleges in Pakistan
Public universities and colleges in Punjab, Pakistan
Universities and colleges in Rawalpindi District |
64359185 | https://en.wikipedia.org/wiki/Leah%20Pruitt | Leah Pruitt | Leah Pruitt (born September 5, 1997) is an American professional soccer player who plays as a forward for Orlando Pride of the National Women's Soccer League.
Early life
Growing up in Rancho Cucamonga, California, Pruitt was a four-year varsity player at Alta Loma High School, breaking the school record for goals in a single season in 2015 with 41 and was a three-time all-league honoree. She began playing club soccer at Sporting California Arsenal FC. On June 27, 2012, the team won the Elite Clubs National League (ECNL) under-14 National Championship, beating San Diego Surf 2–0 in the final held in Chicago. She later moved to West Coast FC, helping the team reach another ECNL National Championship final. In June 2016, Pruitt won the ECNL under-18 National Championship title with Slammers FC, scoring in a 2–0 victory over Michigan Hawks in the final. Pruitt was also a five-year member of the Cal South PRO+ Olympic Development Program.
College career
Pruitt was scouted by San Diego State University when her father contacted the coaches about her older sister Charlee's interest in joining the team as a goalkeeper. Head coach Mike Friesen came to watch their club team play a tournament in Arizona in summer 2012 and, although he had a backlog of goalkeepers, Friesen was interested in signing the team's 4 ft 11in forward, Leah. She committed that December, two years before she could sign a national letter of intent: "SDSU really wanted me, you could tell. I loved how the coaches were really into me coming here. I knew it was the right choice." Pruitt was 5 ft 8in by the time she debuted for the Aztecs in 2015 and was an immediate starter, starting all 18 games before a knee injury kept her out of the final two games of the season. As a freshman Pruitt led the team with 10 goals and nine assists. She was named Mountain West Conference Freshman of the Year and was an All-Mountain West first team selection as the Aztecs won the regular season title and finished as runners-up in the conference tournament, losing the final to San Jose State Spartans in a penalty shootout.
Pruitt transferred to the University of Southern California in 2016. She made 21 appearances for the USC Trojans as a sophomore, all as a substitute, scoring four goals and creating eight assists. USC won the 2016 NCAA Division I Women's Soccer Tournament title with a 3–1 win against West Virginia. Pruitt registered an assist on a Katie Johnson goal during the final. In her junior year, Pruitt started all 20 games, scoring six goals and four assists on her way to All Pac-12 Conference second team honors. In 2018, Pruitt played in all 22 games (starting 21), and put up career-high numbers in both goals with 12 and assists with nine. She earned All Pac-12 and All-Pacific Region first team honors, and was recognized nationally with United Soccer Coaches All-America third team and TopDrawerSoccer.com Best XI third team honors.
Club career
LA Villa
In 2018, Pruitt played for LA Villa in the Women's Premier Soccer League. She scored four goals and registered two assists in five appearances as LA Villa finished third in the Coastal Conference. She was named Coastal Conference Offensive Player of the Year at the end of season awards in August 2018.
North Carolina Courage
Pruitt was drafted in the first round (5th overall) of the 2019 NWSL College Draft by the North Carolina Courage. She made her professional debut in the season opener on April 13, 2019, entering as a 71st-minute substitute for McCall Zerboni in a 1–1 tie with Chicago Red Stars. She scored her first goal on April 28 as part of a 4–1 win away at Houston Dash. In total, Pruitt played 556 minutes in 11 appearances as a rookie, scoring two goals and an assist before a knee injury sidelined her for the final two months including North Carolina's run to the NWSL Championship title having also won the NWSL Shield.
With the 2020 season disrupted by the COVID-19 pandemic during preseason in March, the season didn't start until the 2020 NWSL Challenge Cup in June. Still injured, Pruitt was waived by North Carolina on June 23 as part of the Challenge Cup squad announcement.
OL Reign
Four days after being waived by North Carolina, Pruitt was selected off waivers by OL Reign and immediately placed on the team's 45-day disabled list. "Since I was going through an injury, it was hard in the beginning. Once I got picked up by OL Reign, it was super exciting. It sparked my fire to come up here and be a part of this club." With no regular season following the conclusion of the Challenge Cup, the NWSL scheduled a replacement Fall Series. Pruitt made her club debut in the team's opening Fall Series game on September 26 as a 70th-minute substitute for Jasmyne Spencer in a 2–2 tie with Utah Royals. Pruitt played a total of 132 minutes across all four Fall Series games and scored her first Reign goal in the final game of the series on October 17 having also made her first start for the team in a 2–0 win over Utah.
Ahead of the 2021 season, Pruitt signed a new three-year contract with OL Reign. On the signing, head coach Farid Benstiti said "Leah has exceeded all expectations since she joined our club before the Challenge Cup. She worked incredibly hard to recover from injury, which enabled her to make impact in each of our matches in the Fall Series. We believe Leah has tremendous potential and I am excited to be working with her this season." She played 15 games in all competitions, scoring once during the 2021 NWSL Challenge Cup.
Orlando Pride
On December 18, 2021, Pruitt was traded during the 2022 NWSL Draft by new Reign head coach Laura Harvey along with Celia, the 10th overall pick, and a second-round pick in the 2023 NWSL Draft to Orlando Pride in exchange for Phoebe McClernon.
International
In August 2011, Pruitt was invited to the annual U.S. under-14 National Team Identification Camp held in Portland, Oregon. It featured 72 players and was run by U.S. Soccer Women's Development Director Jill Ellis. In January 2014, Pruitt was called up to the under-17 national team by B. J. Snow for a 24-player training camp at the U.S. Soccer National Training Center in Carson, California. In October 2014, Pruitt was named to an under-20 training camp by Michelle French, and attended a further two training camps with the under-19s in July 2015 and May 2016 in the build-up to the 2016 FIFA U-20 Women's World Cup. In May 2018, Pruitt was called up to an under-23 training camp.
Personal life
Pruitt's father, Aaron, played football as an inside linebacker at San Diego State in the mid-1990s. Her older sister, Charlee, also played soccer and was a four-year starter as a goalkeeper at Loyola Marymount from 2015 to 2018.
Career statistics
College
Club
Honors
College
San Diego State Aztecs
Mountain West Conference regular season: 2015
USC Trojans
NCAA Women's College Cup: 2016
Club
North Carolina Courage
NWSL Championship: 2019
NWSL Shield: 2019
Individual
Mountain West Conference Freshman of the Year: 2015
References
External links
NWSL profile
North Carolina Courage profile
USC Trojans profile
1997 births
Living people
American women's soccer players
Soccer players from California
People from Rancho Cucamonga, California
San Diego State Aztecs women's soccer players
USC Trojans women's soccer players
North Carolina Courage draft picks
North Carolina Courage players
OL Reign players
Orlando Pride players
Women's Premier Soccer League players
National Women's Soccer League players
Women's association football forwards |
20684820 | https://en.wikipedia.org/wiki/BCL%20Molecular | BCL Molecular | The BCL Molecular 18 was a range of 18-bit computers designed and manufactured in the UK from 1970 until the late 1980s. The machines were originally manufactured by Systemation Limited and serviced by Business Mechanisation Limited. The two companies merged in 1968 to form Business Computers Limited, a public limited company. Business Computers Ltd subsequently went into receivership in 1974. It was purchased from the receiver by Computer World Trade; maintenance of existing machines passed to a CWT subsidiary called CFM, manufacturing passed to ABS Computer in the old BCL building, and the sales rights were sold to a team from the old Singer Computers, which by 1976 was trading as Business Computers (Systems) Ltd and selling the Molecular. BC(S) Ltd went public in 1981 to form Business Computers (Systems) Plc. Servicing and manufacturing were gradually taken over by Systemation Services/Systemation Developments Ltd. BC(S) Plc was eventually taken over by Electronic Data Processing (EDP). Amongst its users and service engineers it was affectionately known as the Molly.
Note that neither SADIE nor SUSIE shared any technology with the Molecular series.
External links
BCL Molecular
BCL Molecular 18 Minicomputer
BCL Susie Computer @ The Centre for Computing History
Minicomputers
18-bit computers |
479337 | https://en.wikipedia.org/wiki/Mozilla%20Sunbird | Mozilla Sunbird | Mozilla Sunbird is a discontinued free and open-source, cross-platform calendar application that was developed by the Mozilla Foundation, Sun Microsystems and many volunteers. Mozilla Sunbird was described as "a cross platform standalone calendar application based on Mozilla's XUL user interface language". Announced in July 2003, Sunbird was a standalone version of the Mozilla Calendar Project.
It was developed as a standalone version of the Lightning calendar and scheduling extension for the Mozilla Thunderbird and SeaMonkey mail clients. Development of Sunbird was ended with release 1.0 beta 1 to focus on development of Mozilla Lightning. The latest development version of Sunbird remains 1.0b1 from January 2010, and no later version has been announced. Unlike Lightning, Sunbird no longer receives updates to its time zone database.
Sun contributions
Sun Microsystems contributed significantly to the Lightning extension project to provide users with a free and open-source alternative to Microsoft Office by combining OpenOffice.org and Thunderbird/Lightning. Sun's key focus areas in addition to general bug fixing were calendar views, team/collaboration features and support for the Sun Java System Calendar Server. Since both projects share the same code base, any contribution to one is a direct contribution to the other.
Trademark issues and Iceowl
Although it is released under the MPL/GPL/LGPL tri-license, there are trademark restrictions in place on Mozilla Sunbird which prevent the distribution of modified versions with the Mozilla branding.
As a result, the Debian project created Iceowl, a virtually identical version without the branding restrictions.
Release history
See also
Lightning for Mozilla Thunderbird and SeaMonkey
List of personal information managers
List of applications with iCalendar support
References
External links
MozillaWiki
The Sunbird development blog
Sunbird Portable by PortableApps.com
Linux sunbird installer
Mozilla
Free calendaring software
Personal information managers
Cross-platform software
C++ software
Gecko-based software
Portable software
2003 software
Software using the Mozilla license
Software that uses XUL
Software that uses SQLite
Discontinued software |
533867 | https://en.wikipedia.org/wiki/Backup | Backup | In information technology, a backup, or data backup is a copy of computer data taken and stored elsewhere so that it may be used to restore the original after a data loss event. The verb form, referring to the process of doing so, is "back up", whereas the noun and adjective form is "backup". Backups can be used to recover data after its loss from data deletion or corruption, or to recover data from an earlier time. Backups provide a simple form of disaster recovery; however not all backup systems are able to reconstitute a computer system or other complex configuration such as a computer cluster, active directory server, or database server.
A backup system contains at least one copy of all data considered worth saving. The data storage requirements can be large. An information repository model may be used to provide structure to this storage. There are different types of data storage devices used for copying backups of data that is already in secondary storage onto archive files. There are also different ways these devices can be arranged to provide geographic dispersion, data security, and portability.
Data is selected, extracted, and manipulated for storage. The process can include methods for dealing with live data, including open files, as well as compression, encryption, and de-duplication. Additional techniques apply to enterprise client-server backup. Backup schemes may include dry runs that validate the reliability of the data being backed up. There are limitations and human factors involved in any backup scheme.
Storage
A backup strategy requires an information repository, "a secondary storage space for data" that aggregates backups of data "sources". The repository could be as simple as a list of all backup media (DVDs, etc.) and the dates produced, or could include a computerized index, catalog, or relational database.
The backup data needs to be stored, requiring a backup rotation scheme, which is a system of backing up data to computer media that limits the number of backups of different dates retained separately, by appropriate re-use of the data storage media by overwriting of backups no longer needed. The scheme determines how and when each piece of removable storage is used for a backup operation and how long it is retained once it has backup data stored on it.
3-2-1 rule
The 3-2-1 rule can aid in the backup process. It states that there should be at least 3 copies of the data, stored on 2 different types of storage media, and one copy should be kept offsite, in a remote location (this can include cloud storage). 2 or more different media should be used to eliminate data loss due to similar reasons (for example, optical discs may tolerate being underwater while LTO tapes may not, and SSDs cannot fail due to head crashes or damaged spindle motors since they don't have any moving parts, unlike hard drives). An offsite copy protects against fire, theft of physical media (such as tapes or discs) and natural disasters like floods and earthquakes. Disaster protected hard drives like those made by ioSafe are an alternative to an offsite copy, but they have limitations like only being able to resist fire for a limited period of time, so an offsite copy still remains the ideal choice.
Backup methods
Unstructured
An unstructured repository may simply be a stack of tapes, DVD-Rs or external HDDs with minimal information about what was backed up and when. This method is the easiest to implement, but unlikely to achieve a high level of recoverability as it lacks automation.
Full only/System imaging
A repository using this backup method contains complete source data copies taken at one or more specific points in time. Copying system images, this method is frequently used by computer technicians to record known good configurations. However, imaging is generally more useful as a way of deploying a standard configuration to many systems rather than as a tool for making ongoing backups of diverse systems.
Incremental
An incremental backup stores data changed since a reference point in time. Duplicate copies of unchanged data aren't copied. Typically a full backup of all files is made once or at infrequent intervals, serving as the reference point for an incremental repository. Subsequently, a number of incremental backups are made after successive time periods. Restores begin with the last full backup and then apply the incrementals.
Forever incremental backup starts with one initial full backup; afterwards, only incremental backups are created. The benefits of forever incremental backup are less backup storage and less bandwidth usage, and users can schedule backups more frequently to achieve a shorter RPO. Some backup systems can create a synthetic full backup from a series of incrementals, thus providing the equivalent of frequently doing a full backup. When done to modify a single archive file, this speeds restores of recent versions of files.
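As a minimal illustration of the selection step only (the paths, the state file and the copy policy below are invented for the example, and real products also track deletions and metadata), an incremental run can compare each file's modification time against the timestamp recorded by the previous run:

import os
import shutil
import time

def incremental_backup(source_dir, backup_dir, state_file):
    """Copy files modified since the last recorded backup time."""
    # Read the reference point; default to 0 (the epoch) for the first, full run.
    try:
        with open(state_file) as f:
            last_backup = float(f.read().strip())
    except FileNotFoundError:
        last_backup = 0.0

    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) > last_backup:
                rel = os.path.relpath(src, source_dir)
                dst = os.path.join(backup_dir, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)   # copy2 preserves timestamps

    # Record the new reference point for the next incremental run.
    with open(state_file, "w") as f:
        f.write(str(time.time()))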
Near-CDP
Continuous Data Protection (CDP) refers to a backup that instantly saves a copy of every change made to the data. This allows restoration of data to any point in time and is the most comprehensive and advanced data protection. Near-CDP backup applications—often marketed as "CDP"—automatically take incremental backups at a specific interval, for example every 15 minutes, one hour, or 24 hours. They can therefore only allow restores to an interval boundary. Near-CDP backup applications use journaling and are typically based on periodic "snapshots", read-only copies of the data frozen at a particular point in time.
Near-CDP (except for Apple Time Machine) intent-logs every change on the host system, often by saving byte or block-level differences rather than file-level differences. This backup method differs from simple disk mirroring in that it enables a roll-back of the log and thus a restoration of old images of data. Intent-logging allows precautions for the consistency of live data, protecting self-consistent files but requiring applications "be quiesced and made ready for backup."
Near-CDP is more practicable for ordinary personal backup applications, as opposed to true CDP, which must be run in conjunction with a virtual machine or equivalent and is therefore generally used in enterprise client-server backups.
Reverse incremental
A reverse incremental backup method stores a recent archive file "mirror" of the source data and a series of differences between the "mirror" in its current state and its previous states. The method starts with a non-image full backup. After the full backup is performed, the system periodically synchronizes the full backup with the live copy, while storing the data necessary to reconstruct older versions. This can be done either with hard links, as Apple Time Machine does, or with binary diffs.
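A rough sketch of the idea follows (the directory layout and naming are hypothetical, and deletions, permissions and atomicity are ignored): the "mirror" is refreshed from the live data, and each file version it overwrites is moved into a dated differences directory so that older states can be rebuilt on top of the mirror.

import os
import shutil
import filecmp
from datetime import datetime

def reverse_incremental(source_dir, mirror_dir, diffs_root):
    """Update the mirror; stash each replaced file version so older states survive."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    diff_dir = os.path.join(diffs_root, stamp)

    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, source_dir)
            dst = os.path.join(mirror_dir, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            if os.path.exists(dst) and filecmp.cmp(src, dst, shallow=False):
                continue  # unchanged, nothing to do
            if os.path.exists(dst):
                # Preserve the outgoing version before overwriting the mirror.
                old = os.path.join(diff_dir, rel)
                os.makedirs(os.path.dirname(old), exist_ok=True)
                shutil.move(dst, old)
            shutil.copy2(src, dst)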
Differential
A differential backup saves only the data that has changed since the last full backup. This means a maximum of two backups from the repository are used to restore the data. However, as time from the last full backup (and thus the accumulated changes in data) increases, so does the time to perform the differential backup. Restoring an entire system requires starting from the most recent full backup and then applying just the last differential backup.
A differential backup copies files that have been created or changed since the last full backup, regardless of whether any other differential backups have been made since, whereas an incremental backup copies files that have been created or changed since the most recent backup of any type (full or incremental). Changes in files may be detected through a more recent date/time of last modification file attribute, and/or changes in file size. Other variations of incremental backup include multi-level incrementals and block-level incrementals that compare parts of files instead of just entire files.
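The practical difference between the two policies is only the reference time that changed files are compared against, as the hedged sketch below illustrates (modification-time comparison only; real software may also use archive bits, file sizes or checksums):

import os

def changed_since(source_dir, reference_time):
    """Return files whose last-modification time is newer than reference_time."""
    changed = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) > reference_time:
                changed.append(path)
    return changed

# Differential: always compare against the last FULL backup.
# differential_set = changed_since(src, time_of_last_full)
# Incremental: compare against the most recent backup of ANY type.
# incremental_set = changed_since(src, time_of_last_backup)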
Storage media
Regardless of the repository model that is used, the data has to be copied onto an archive file data storage medium. The medium used is also referred to as the type of backup destination.
Magnetic tape
Magnetic tape was for a long time the most commonly used medium for bulk data storage, backup, archiving, and interchange. It was previously a less expensive option, but this is no longer the case for smaller amounts of data. Tape is a sequential access medium, so the rate of continuously writing or reading data can be very fast. While tape media itself has a low cost per space, tape drives are typically dozens of times as expensive as hard disk drives and optical drives.
Many tape formats have been proprietary or specific to certain markets like mainframes or a particular brand of personal computer. By 2014 LTO had become the primary tape technology. The other remaining viable "super" format is the IBM 3592 (also referred to as the TS11xx series). The Oracle StorageTek T10000 was discontinued in 2016.
Hard disk
The use of hard disk storage has increased over time as it has become progressively cheaper. Hard disks are usually easy to use, widely available, and can be accessed quickly. However, hard disk backups are close-tolerance mechanical devices and may be more easily damaged than tapes, especially while being transported. In the mid-2000s, several drive manufacturers began to produce portable drives employing ramp loading and accelerometer technology (sometimes termed a "shock sensor"), and by 2010 the industry average in drop tests for drives with that technology showed drives remaining intact and working after a 36-inch non-operating drop onto industrial carpeting. Some manufacturers also offer 'ruggedized' portable hard drives, which include a shock-absorbing case around the hard disk, and claim a range of higher drop specifications. Over a period of years, hard disk backups are less stable than tape backups.
External hard disks can be connected via local interfaces like SCSI, USB, FireWire, or eSATA, or via longer-distance technologies like Ethernet, iSCSI, or Fibre Channel. Some disk-based backup systems, via Virtual Tape Libraries or otherwise, support data deduplication, which can reduce the amount of disk storage capacity consumed by daily and weekly backup data.
Optical storage
Optical storage uses lasers to store and retrieve data. Recordable CDs, DVDs, and Blu-ray Discs are commonly used with personal computers and are generally cheap. In the past, the capacities and speeds of these discs have been lower than hard disks or tapes, although advances in optical media are slowly shrinking that gap.
Potential future data losses caused by gradual media degradation can be predicted by measuring the rate of correctable minor data errors, of which consecutively too many increase the risk of uncorrectable sectors. Support for error scanning varies among optical drive vendors.
Many optical disc formats are WORM type, which makes them useful for archival purposes since the data cannot be changed. Moreover, optical discs are not vulnerable to head crashes, magnetism, imminent water ingress or power surges, and a fault of the drive typically just halts the spinning.
Optical media is modular; the storage controller is external and not tied to media itself like with hard drives or flash storage (flash memory controller), allowing it to be removed and accessed through a different drive. However, recordable media may degrade earlier under long-term exposure to light.
The lack of internal components and magnetism makes optical media unaffected by single event effects from ionizing radiation that can be caused by environmental disasters like a nuclear meltdown or solar storm.
Some optical storage systems allow for cataloged data backups without human contact with the discs, allowing for longer data integrity. A French study in 2008 indicated that the lifespan of typically-sold CD-Rs was 2–10 years, but one manufacturer later estimated the longevity of its CD-Rs with a gold-sputtered layer to be as high as 100 years. As of 2016, Sony's proprietary Optical Disc Archive could reach a read rate of 250 MB/s.
Solid-state drive (SSD)
Solid-state drives (SSDs) use integrated circuit assemblies to store data. Flash memory, thumb drives, USB flash drives, CompactFlash, SmartMedia, Memory Sticks, and Secure Digital card devices are relatively expensive for their low capacity, but convenient for backing up relatively low data volumes. A solid-state drive does not contain any movable parts, making it less susceptible to physical damage, and can have huge throughput of around 500 Mbit/s up to 6 Gbit/s. Available SSDs have become more capacious and cheaper. Flash memory backups are stable for fewer years than hard disk backups.
Remote backup service
Remote backup services or cloud backups involve service providers storing data offsite. This has been used to protect against events such as fires, floods, or earthquakes which could destroy locally stored backups. Cloud-based backup (through services such as Google Drive and Microsoft OneDrive) provides a layer of data protection. However, the users must trust the provider to maintain the privacy and integrity of their data, with confidentiality enhanced by the use of encryption. Because speed and availability are limited by a user's online connection, users with large amounts of data may need to use cloud seeding and large-scale recovery.
Management
Various methods can be used to manage backup media, striking a balance between accessibility, security and cost. These media management methods are not mutually exclusive and are frequently combined to meet the user's needs. Using on-line disks for staging data before it is sent to a near-line tape library is a common example.
Online
Online backup storage is typically the most accessible type of data storage, and can begin a restore in milliseconds. An internal hard disk or a disk array (maybe connected to SAN) is an example of an online backup. This type of storage is convenient and speedy, but is vulnerable to being deleted or overwritten, either by accident, by malevolent action, or in the wake of a data-deleting virus payload.
Nearline
Nearline storage is typically less accessible and less expensive than online storage, but still useful for backup data storage. A mechanical device is usually used to move media units from storage into a drive where the data can be read or written. Generally it has safety properties similar to on-line storage. An example is a tape library with restore times ranging from seconds to a few minutes.
Off-line
Off-line storage requires some direct action to provide access to the storage media: for example, inserting a tape into a tape drive or plugging in a cable. Because the data is not accessible via any computer except during limited periods in which they are written or read back, they are largely immune to on-line backup failure modes. Access time varies depending on whether the media are on-site or off-site.
Off-site data protection
Backup media may be sent to an off-site vault to protect against a disaster or other site-specific problem. The vault can be as simple as a system administrator's home office or as sophisticated as a disaster-hardened, temperature-controlled, high-security bunker with facilities for backup media storage. A data replica can be off-site but also on-line (e.g., an off-site RAID mirror). Such a replica has fairly limited value as a backup.
Backup site
A backup site or disaster recovery center is used to store data that can enable computer systems and networks to be restored and properly configured in the event of a disaster. Some organisations have their own data recovery centres, while others contract this out to a third party. Due to high costs, backing up is rarely considered the preferred method of moving data to a DR site. A more typical way would be remote disk mirroring, which keeps the DR data as up to date as possible.
Selection and extraction of data
A backup operation starts with selecting and extracting coherent units of data. Most data on modern computer systems is stored in discrete units, known as files. These files are organized into filesystems. Deciding what to back up at any given time involves tradeoffs. By backing up too much redundant data, the information repository will fill up too quickly. Backing up an insufficient amount of data can eventually lead to the loss of critical information.
Files
Copying files : Making copies of files is the simplest and most common way to perform a backup. A means to perform this basic function is included in all backup software and all operating systems.
Partial file copying: A backup may include only the blocks or bytes within a file that have changed in a given period of time. This can substantially reduce needed storage space, but requires higher sophistication to reconstruct files in a restore situation. Some implementations require integration with the source file system.
Deleted files : To prevent the unintentional restoration of files that have been intentionally deleted, a record of the deletion must be kept.
Versioning of files : Most backup applications, other than those that do only full only/System imaging, also back up files that have been modified since the last backup. "That way, you can retrieve many different versions of a given file, and if you delete it on your hard disk, you can still find it in your [information repository] archive."
Filesystems
Filesystem dump: A copy of the whole filesystem in block-level can be made. This is also known as a "raw partition backup" and is related to disk imaging. The process usually involves unmounting the filesystem and running a program like dd (Unix). Because the disk is read sequentially and with large buffers, this type of backup can be faster than reading every file normally, especially when the filesystem contains many small files, is highly fragmented, or is nearly full. But because this method also reads the free disk blocks that contain no useful data, this method can also be slower than conventional reading, especially when the filesystem is nearly empty. Some filesystems, such as XFS, provide a "dump" utility that reads the disk sequentially for high performance while skipping unused sections. The corresponding restore utility can selectively restore individual files or the entire volume at the operator's choice.
Identification of changes: Some filesystems have an archive bit for each file that says it was recently changed. Some backup software looks at the date of the file and compares it with the last backup to determine whether the file was changed.
Versioning file system : A versioning filesystem tracks all changes to a file. The NILFS versioning filesystem for Linux is an example.
Live data
Files that are actively being updated present a challenge to back up. One way to back up live data is to temporarily quiesce them (e.g., close all files), take a "snapshot", and then resume live operations. At this point the snapshot can be backed up through normal methods. A snapshot is an instantaneous function of some filesystems that presents a copy of the filesystem as if it were frozen at a specific point in time, often by a copy-on-write mechanism. Snapshotting a file while it is being changed results in a corrupted file that is unusable. This is also the case across interrelated files, as may be found in a conventional database or in applications such as Microsoft Exchange Server. The term fuzzy backup can be used to describe a backup of live data that looks like it ran correctly, but does not represent the state of the data at a single point in time.
Backup options for data files that cannot be or are not quiesced include:
Open file backup: Many backup software applications undertake to back up open files in an internally consistent state. Some applications simply check whether open files are in use and try again later. Other applications exclude open files that are updated very frequently. Some low-availability interactive applications can be backed up via natural/induced pausing.
Interrelated database files backup: Some interrelated database file systems offer a means to generate a "hot backup" of the database while it is online and usable. This may include a snapshot of the data files plus a snapshotted log of changes made while the backup is running. Upon a restore, the changes in the log files are applied to bring the copy of the database up to the point in time at which the initial backup ended. Other low-availability interactive applications can be backed up via coordinated snapshots. However, genuinely high-availability interactive applications can only be backed up via Continuous Data Protection.
Metadata
Not all information stored on the computer is stored in files. Accurately recovering a complete system from scratch requires keeping track of this non-file data too.
System description: System specifications are needed to procure an exact replacement after a disaster.
Boot sector : The boot sector can sometimes be recreated more easily than saving it. It usually isn't a normal file and the system won't boot without it.
Partition layout: The layout of the original disk, as well as partition tables and filesystem settings, is needed to properly recreate the original system.
File metadata : Each file's permissions, owner, group, ACLs, and any other metadata need to be backed up for a restore to properly recreate the original environment.
System metadata: Different operating systems have different ways of storing configuration information. Microsoft Windows keeps a registry of system information that is more difficult to restore than a typical file.
Manipulation of data and dataset optimization
It is frequently useful or required to manipulate the data being backed up to optimize the backup process. These manipulations can improve backup speed, restore speed, data security, media usage and/or reduced bandwidth requirements.
Automated data grooming
Out-of-date data can be automatically deleted, but for personal backup applications—as opposed to enterprise client-server backup applications where automated data "grooming" can be customized—the deletion can at most be globally delayed or be disabled.
Compression
Various schemes can be employed to shrink the size of the source data to be stored so that it uses less storage space. Compression is frequently a built-in feature of tape drive hardware.
Deduplication
Redundancy due to backing up similarly configured workstations can be reduced, thus storing just one copy. This technique can be applied at the file or raw block level. This potentially large reduction is called deduplication. It can occur on a server before any data moves to backup media, sometimes referred to as source/client side deduplication. This approach also reduces bandwidth required to send backup data to its target media. The process can also occur at the target storage device, sometimes referred to as inline or back-end deduplication.
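One common way to realise this, shown here as a simplified sketch rather than a description of any particular product, is a content-addressed store: each chunk is written once under the hash of its contents, and repeated chunks merely add another reference in a manifest.

import hashlib
import os

def store_deduplicated(data_chunks, store_dir):
    """Store chunks under their SHA-256 digest; identical chunks are kept only once."""
    os.makedirs(store_dir, exist_ok=True)
    manifest = []  # ordered list of digests needed to rebuild the original stream
    for chunk in data_chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        path = os.path.join(store_dir, digest)
        if not os.path.exists(path):        # new content: write it once
            with open(path, "wb") as f:
                f.write(chunk)
        manifest.append(digest)             # duplicates only add a reference
    return manifest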
Duplication
Sometimes backups are duplicated to a second set of storage media. This can be done to rearrange the archive files to optimize restore speed, or to have a second copy at a different location or on a different storage medium—as in the disk-to-disk-to-tape capability of Enterprise client-server backup.
If backup media is unavailable, duplicates on the same device may allow merging files' intact parts using a byte editor in case of data corruption.
Encryption
High-capacity removable storage media such as backup tapes present a data security risk if they are lost or stolen. Encrypting the data on these media can mitigate this problem, however encryption is a CPU intensive process that can slow down backup speeds, and the security of the encrypted backups is only as effective as the security of the key management policy.
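For illustration only, a backup archive can be encrypted before it leaves the site with a symmetric scheme such as Fernet from the third-party cryptography package; key management, which the text identifies as the deciding factor, is deliberately left out of this sketch, and the file paths are hypothetical.

from cryptography.fernet import Fernet

def encrypt_backup(archive_path, encrypted_path, key):
    """Encrypt an archive file; the key must be stored and protected separately."""
    f = Fernet(key)
    with open(archive_path, "rb") as src:
        token = f.encrypt(src.read())      # CPU-intensive for large archives
    with open(encrypted_path, "wb") as dst:
        dst.write(token)

# key = Fernet.generate_key()   # generated once; losing the key makes the backups unreadable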
Multiplexing
When there are many more computers to be backed up than there are destination storage devices, the ability to use a single storage device with several simultaneous backups can be useful. However cramming the scheduled backup window via "multiplexed backup" is only used for tape destinations.
Refactoring
The process of rearranging the sets of backups in an archive file is known as refactoring. For example, if a backup system uses a single tape each day to store the incremental backups for all the protected computers, restoring one of the computers could require many tapes. Refactoring could be used to consolidate all the backups for a single computer onto a single tape, creating a "synthetic full backup". This is especially useful for backup systems that do incrementals forever style backups.
Staging
Sometimes backups are copied to a staging disk before being copied to tape. This process is sometimes referred to as D2D2T, an acronym for Disk-to-disk-to-tape. It can be useful if there is a problem matching the speed of the final destination device with the source device, as is frequently faced in network-based backup systems. It can also serve as a centralized location for applying other data manipulation techniques.
Objectives
Recovery point objective (RPO) : The point in time that the restarted infrastructure will reflect, expressed as "the maximum targeted period in which data (transactions) might be lost from an IT service due to a major incident". Essentially, this is the roll-back that will be experienced as a result of the recovery. The most desirable RPO would be the point just prior to the data loss event. Making a more recent recovery point achievable requires increasing the frequency of synchronization between the source data and the backup repository.
Recovery time objective (RTO) : The amount of time elapsed between disaster and restoration of business functions.
Data security : In addition to preserving access to data for its owners, data must be restricted from unauthorized access. Backups must be performed in a manner that does not compromise the original owner's undertaking. This can be achieved with data encryption and proper media handling policies.
Data retention period : Regulations and policy can lead to situations where backups are expected to be retained for a particular period, but not any further. Retaining backups after this period can lead to unwanted liability and sub-optimal use of storage media.
Checksum or hash function validation : Applications that back up to tape archive files need this option to verify that the data was accurately copied; a minimal sketch of such a check follows this list.
Backup process monitoring : Enterprise client-server backup applications need a user interface that allows administrators to monitor the backup process, and proves compliance to regulatory bodies outside the organization; for example, an insurance company in the USA might be required under HIPAA to demonstrate that its client data meet records retention requirements.
User-initiated backups and restores : To avoid or recover from minor disasters, such as inadvertently deleting or overwriting the "good" versions of one or more files, the computer user—rather than an administrator—may initiate backups and restores (from not necessarily the most-recent backup) of files or folders.
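Relating to the checksum validation objective above, a minimal check (assuming both the original and the copy are reachable as ordinary files, which is not the case for tape media accessed through a drive) is to compare streaming SHA-256 digests:

import hashlib

def sha256_of(path, block_size=1 << 20):
    """Stream a file through SHA-256 so large archives fit in constant memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(block_size), b""):
            h.update(block)
    return h.hexdigest()

def verify_copy(original, backup_copy):
    return sha256_of(original) == sha256_of(backup_copy)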
See also
About backup
Backup software & services
List of backup software
List of online backup services
Glossary of backup terms
Virtual backup appliance
Related topics
Data consistency
Data degradation
Data portability
Data proliferation
Database dump
Digital preservation
Disaster recovery and business continuity auditing
Notes
References
External links
Computer data
Data management
Data security
Records management |
31219784 | https://en.wikipedia.org/wiki/Risk-based%20authentication | Risk-based authentication | In authentication, risk-based authentication is a non-static authentication system which takes into account the profile (IP address, User-Agent HTTP header, time of access, and so on) of the agent requesting access to the system in order to determine the risk profile associated with that transaction. The risk profile is then used to determine the complexity of the challenge. Higher risk profiles lead to stronger challenges, whereas a static username/password may suffice for lower-risk profiles. Risk-based implementation allows the application to challenge the user for additional credentials only when the risk level is appropriate.
The aim is to improve the accuracy of user validation without inconveniencing the user; risk-based authentication is used by major companies.
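A toy example of such a computation is sketched below; the signals, weights and thresholds are invented for illustration and are not taken from any real product.

def risk_score(login):
    """Sum weighted risk signals for a login attempt (all weights are illustrative)."""
    score = 0
    if login["ip_country"] != login["usual_country"]:
        score += 40                      # unfamiliar geolocation
    if login["device_id"] not in login["known_devices"]:
        score += 30                      # new or unrecognized device
    if login["hour"] < 6 or login["hour"] > 23:
        score += 15                      # unusual time of access
    if login["failed_attempts"] > 2:
        score += 25                      # recent failed logins
    return score

def required_challenge(score):
    if score < 30:
        return "password"                # low risk: static credentials suffice
    if score < 60:
        return "password + one-time code"
    return "block and require manual identity verification"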
Criticism
The system that computes the risk profile has to be diligently maintained and updated as new threats emerge. Improper configuration may lead to unauthorized access.
The user's connection profile (e.g. IP Geolocation, connection type, keystroke dynamics, user behaviour) has to be detected and used to compute the risk profile. Lack of proper detection may lead to unauthorized access.
See also
References
http://www.google.com/patents/US20050097320
Authentication methods
Computer access control
Applications of cryptography
Access control
Password authentication |
1654769 | https://en.wikipedia.org/wiki/Artificial%20intelligence%20in%20video%20games | Artificial intelligence in video games | In video games, artificial intelligence (AI) is used to generate responsive, adaptive or intelligent behaviors primarily in non-player characters (NPCs) similar to human-like intelligence. Artificial intelligence has been an integral part of video games since their inception in the 1950s. AI in video games is a distinct subfield and differs from academic AI. It serves to improve the game-player experience rather than machine learning or decision making. During the golden age of arcade video games the idea of AI opponents was largely popularized in the form of graduated difficulty levels, distinct movement patterns, and in-game events dependent on the player's input. Modern games often implement existing techniques such as pathfinding and decision trees to guide the actions of NPCs. AI is often used in mechanisms which are not immediately visible to the user, such as data mining and procedural-content generation.
However, "game AI" does not, in general, as might be thought and sometimes is depicted to be the case, mean a realization of an artificial person corresponding to an NPC, in the manner of say, the Turing test or an artificial general intelligence.
Overview
The term "game AI" is used to refer to a broad set of algorithms that also include techniques from control theory, robotics, computer graphics and computer science in general, and so video game AI may often not constitute "true AI" in that such techniques do not necessarily facilitate computer learning or other standard criteria, only constituting "automated computation" or a predetermined and limited set of responses to a predetermined and limited set of inputs.
Many industries and corporate voices claim that so-called video game AI has come a long way in the sense that it has revolutionized the way humans interact with all forms of technology, although many expert researchers are skeptical of such claims, and particularly of the notion that such technologies fit the definition of "intelligence" standardly used in the cognitive sciences. Industry voices make the argument that AI has become more versatile in the way we use all technological devices for more than their intended purpose because the AI allows the technology to operate in multiple ways, allegedly developing their own personalities and carrying out complex instructions of the user.
However, people in the field of AI have argued that video game AI is not true intelligence, but an advertising buzzword used to describe computer programs that use simple sorting and matching algorithms to create the illusion of intelligent behavior while bestowing software with a misleading aura of scientific or technological complexity and advancement. Since game AI for NPCs is centered on appearance of intelligence and good gameplay within environment restrictions, its approach is very different from that of traditional AI.
History
Game playing was an area of research in AI from its inception. One of the first examples of AI is the computerized game of Nim made in 1951 and published in 1952. Despite being advanced technology in the year it was made, 20 years before Pong, the game took the form of a relatively small box and was able to regularly win games even against highly skilled players of the game. In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess. These were among the first computer programs ever written. Arthur Samuel's checkers program, developed in the middle 50s and early 60s, eventually achieved sufficient skill to challenge a respectable amateur. Work on checkers and chess would culminate in the defeat of Garry Kasparov by IBM's Deep Blue computer in 1997. The first video games developed in the 1960s and early 1970s, like Spacewar!, Pong, and Gotcha (1973), were games implemented on discrete logic and strictly based on the competition of two players, without AI.
Games that featured a single player mode with enemies started appearing in the 1970s. The first notable ones for the arcade appeared in 1974: the Taito game Speed Race (racing video game) and the Atari games Qwak (duck hunting light gun shooter) and Pursuit (fighter aircraft dogfighting simulator). Two text-based computer games from 1972, Hunt the Wumpus and Star Trek, also had enemies. Enemy movement was based on stored patterns. The incorporation of microprocessors would allow more computation and random elements overlaid into movement patterns.
It was during the golden age of video arcade games that the idea of AI opponents was largely popularized, due to the success of Space Invaders (1978), which sported an increasing difficulty level, distinct movement patterns, and in-game events dependent on hash functions based on the player's input. Galaxian (1979) added more complex and varied enemy movements, including maneuvers by individual enemies who break out of formation. Pac-Man (1980) introduced AI patterns to maze games, with the added quirk of different personalities for each enemy. Karate Champ (1984) later introduced AI patterns to fighting games, although the poor AI prompted the release of a second version. First Queen (1988) was a tactical action RPG which featured characters that can be controlled by the computer's AI in following the leader. The role-playing video game Dragon Quest IV (1990) introduced a "Tactics" system, where the user can adjust the AI routines of non-player characters during battle, a concept later introduced to the action role-playing game genre by Secret of Mana (1993).
Games like Madden Football, Earl Weaver Baseball and Tony La Russa Baseball all based their AI in an attempt to duplicate on the computer the coaching or managerial style of the selected celebrity. Madden, Weaver and La Russa all did extensive work with these game development teams to maximize the accuracy of the games. Later sports titles allowed users to "tune" variables in the AI to produce a player-defined managerial or coaching strategy.
The emergence of new game genres in the 1990s prompted the use of formal AI tools like finite state machines. Real-time strategy games taxed the AI with many objects, incomplete information, pathfinding problems, real-time decisions and economic planning, among other things. The first games of the genre had notorious problems. Herzog Zwei (1989), for example, had almost broken pathfinding and very basic three-state state machines for unit control, and Dune II (1992) attacked the players' base in a beeline and used numerous cheats. Later games in the genre exhibited more sophisticated AI.
Later games have used bottom-up AI methods, such as the emergent behaviour and evaluation of player actions in games like Creatures or Black & White. Façade (interactive story) was released in 2005 and used interactive multiple way dialogs and AI as the main aspect of game.
Games have provided an environment for developing artificial intelligence with potential applications beyond gameplay. Examples include Watson, a Jeopardy!-playing computer; and the RoboCup tournament, where robots are trained to compete in soccer.
Views
Many experts complain that the "AI" in the term "game AI" overstates its worth, as game AI is not about intelligence, and shares few of the objectives of the academic field of AI. Whereas "real AI" addresses fields of machine learning, decision making based on arbitrary data input, and even the ultimate goal of strong AI that can reason, "game AI" often consists of a half-dozen rules of thumb, or heuristics, that are just enough to give a good gameplay experience. Historically, academic game-AI projects have been relatively separate from commercial products because the academic approaches tended to be simple and non-scalable. Commercial game AI has developed its own set of tools, which have been sufficient to give good performance in many cases.
Game developers' increasing awareness of academic AI and a growing interest in computer games by the academic community is causing the definition of what counts as AI in a game to become less idiosyncratic. Nevertheless, significant differences between different application domains of AI mean that game AI can still be viewed as a distinct subfield of AI. In particular, the ability to legitimately solve some AI problems in games by cheating creates an important distinction. For example, inferring the position of an unseen object from past observations can be a difficult problem when AI is applied to robotics, but in a computer game a NPC can simply look up the position in the game's scene graph. Such cheating can lead to unrealistic behavior and so is not always desirable. But its possibility serves to distinguish game AI and leads to new problems to solve, such as when and how to cheat.
The major limitation to strong AI is the inherent depth of thinking and the extreme complexity of the decision making process. This means that although it would then be theoretically possible to make "smart" AI, the problem would require considerable processing power.
Usage
In computer simulations of board games
Computer chess
Computer shogi
Computer Go
Computer checkers
Computer Othello
Computer poker players
Akinator
Computer Arimaa
Logistello, which plays Reversi
Rog-O-Matic, which plays Rogue
Computer players of Scrabble
A variety of board games in the Computer Olympiad
General game playing
Solved games have a computer strategy which is guaranteed to be optimal, and in some cases force a win or draw.
In modern video games
Game AI/heuristic algorithms are used in a wide variety of quite disparate fields inside a game. The most obvious is in the control of any NPCs in the game, although "scripting" (decision tree) is currently the most common means of control. These handwritten decision trees often result in "artificial stupidity" such as repetitive behavior, loss of immersion, or abnormal behavior in situations the developers did not plan for.
Pathfinding, another common use for AI, is widely seen in real-time strategy games. Pathfinding is the method for determining how to get a NPC from one point on a map to another, taking into consideration the terrain, obstacles and possibly "fog of war". Commercial videogames often use fast and simple "grid-based pathfinding", wherein the terrain is mapped onto a rigid grid of uniform squares and a pathfinding algorithm such as A* or IDA* is applied to the grid. Instead of just a rigid grid, some games use irregular polygons and assemble a navigation mesh out of the areas of the map that NPCs can walk to. As a third method, it is sometimes convenient for developers to manually select "waypoints" that NPCs should use to navigate; the cost is that such waypoints can create unnatural-looking movement. In addition, waypoints tend to perform worse than navigation meshes in complex environments. Beyond static pathfinding, navigation is a sub-field of Game AI focusing on giving NPCs the capability to navigate in a dynamic environment, finding a path to a target while avoiding collisions with other entities (other NPC, players...) or collaborating with them (group navigation). Navigation in dynamic strategy games with large numbers of units, such as Age of Empires (1997) or Civilization V (2010), often performs poorly; units often get in the way of other units.
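As a simplified sketch of the grid-based approach (a 4-connected grid with unit move costs, not code from any particular engine), A* can be applied as follows:

import heapq

def a_star(grid, start, goal):
    """Grid-based A* pathfinding. grid[y][x] == 0 means walkable, 1 means blocked.
    Returns a list of (x, y) cells from start to goal, or None if unreachable."""
    def heuristic(a, b):          # Manhattan distance, admissible on a 4-connected grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_heap = [(heuristic(start, goal), 0, start)]
    came_from = {}
    g_score = {start: 0}

    while open_heap:
        _f, g, current = heapq.heappop(open_heap)
        if current == goal:
            path = [current]
            while current in came_from:       # walk back through recorded parents
                current = came_from[current]
                path.append(current)
            return path[::-1]
        if g > g_score.get(current, float("inf")):
            continue                          # stale heap entry, already improved
        x, y = current
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                tentative = g + 1
                if tentative < g_score.get((nx, ny), float("inf")):
                    g_score[(nx, ny)] = tentative
                    came_from[(nx, ny)] = current
                    heapq.heappush(open_heap,
                                   (tentative + heuristic((nx, ny), goal), tentative, (nx, ny)))
    return None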
Rather than improve the Game AI to properly solve a difficult problem in the virtual environment, it is often more cost-effective to just modify the scenario to be more tractable. If pathfinding gets bogged down over a specific obstacle, a developer may just end up moving or deleting the obstacle. In Half-Life (1998), the pathfinding algorithm sometimes failed to find a reasonable way for all the NPCs to evade a thrown grenade; rather than allow the NPCs to attempt to bumble out of the way and risk appearing stupid, the developers instead scripted the NPCs to crouch down and cover in place in that situation.
Video game combat AI
Many contemporary video games fall under the category of action, first-person shooter, or adventure. In most of these types of games, there is some level of combat that takes place. The AI's ability to be efficient in combat is important in these genres. A common goal today is to make the AI more human or at least appear so.
One of the more positive and efficient features found in modern-day video game AI is the ability to hunt. AI originally reacted in a very black and white manner. If the player were in a specific area then the AI would react in either a complete offensive manner or be entirely defensive. In recent years, the idea of "hunting" has been introduced; in this 'hunting' state the AI will look for realistic markers, such as sounds made by the character or footprints they may have left behind. These developments ultimately allow for a more complex form of play. With this feature, the player can actually consider how to approach or avoid an enemy. This is a feature that is particularly prevalent in the stealth genre.
Another recent development in game AI has been the idea of a "survival instinct". In-game computers can recognize different objects in an environment and determine whether it is beneficial or detrimental to its survival. Like a user, the AI can look for cover in a firefight before taking actions that would leave it otherwise vulnerable, such as reloading a weapon or throwing a grenade. There can be set markers that tell it when to react in a certain way. For example, if the AI is given a command to check its health throughout a game then further commands can be set so that it reacts a specific way at a certain percentage of health. If the health is below a certain threshold then the AI can be set to run away from the player and avoid it until another function is triggered. Another example could be if the AI notices it is out of bullets, it will find a cover object and hide behind it until it has reloaded. Actions like these make the AI seem more human. However, there is still a need for improvement in this area.
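The threshold-driven behaviour described above can be sketched as a small set of prioritised rules; the state fields, percentages and action names here are placeholders rather than values from any actual game.

from dataclasses import dataclass

@dataclass
class AgentState:
    health_percent: int = 100
    ammo: int = 30
    grenade_nearby: bool = False
    sees_player: bool = False
    heard_noise: bool = False
    found_footprints: bool = False

def choose_action(agent: AgentState) -> str:
    """Pick the NPC's next high-level behaviour from simple survival rules."""
    if agent.health_percent < 25:
        return "flee"                       # retreat until another trigger fires
    if agent.ammo == 0:
        return "take_cover_and_reload"      # avoid being caught vulnerable in the open
    if agent.grenade_nearby:
        return "dive_away_or_throw_back"
    if agent.sees_player:
        return "attack_from_cover"
    if agent.heard_noise or agent.found_footprints:
        return "hunt"                       # investigate markers left by the player
    return "patrol"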
Another side-effect of combat AI occurs when two AI-controlled characters encounter each other; first popularized in the id Software game Doom, so-called 'monster infighting' can break out in certain situations. Specifically, AI agents that are programmed to respond to hostile attacks will sometimes attack each other if their cohort's attacks land too close to them. In the case of Doom, published gameplay manuals even suggest taking advantage of monster infighting in order to survive certain levels and difficulty settings.
Monte Carlo tree search method
Game AI often amounts to pathfinding and finite state machines. Pathfinding gets the AI from point A to point B, usually in the most direct way possible. State machines permit transitioning between different behaviors. The Monte Carlo tree search method provides a more engaging game experience by creating additional obstacles for the player to overcome. The MCTS consists of a tree diagram in which the AI essentially plays tic-tac-toe. Depending on the outcome, it selects a pathway yielding the next obstacle for the player. In complex video games, these trees may have more branches, provided that the player can come up with several strategies to surpass the obstacle.
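In its standard form, MCTS repeats four phases: selection, expansion, simulation and backpropagation. The outline below is a hedged sketch: the game-state interface (legal_moves, play, is_terminal, result) is assumed rather than taken from any real engine, and for brevity every simulation is scored from the root player's viewpoint, whereas a two-player implementation would alternate perspectives.

import math
import random

class Node:
    """One node of the search tree; value/visits drive the UCT selection rule."""
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children = []
        self.untried_moves = list(state.legal_moves())
        self.visits = 0
        self.value = 0.0

    def uct_child(self, c=1.4):
        return max(self.children, key=lambda ch: ch.value / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while fully expanded and non-terminal.
        while not node.untried_moves and node.children:
            node = node.uct_child()
        # 2. Expansion: add one child for an untried move.
        if node.untried_moves:
            move = node.untried_moves.pop()
            node = Node(node.state.play(move), parent=node, move=move)
            node.parent.children.append(node)
        # 3. Simulation: play random moves to the end of the game.
        state = node.state
        while not state.is_terminal():
            state = state.play(random.choice(list(state.legal_moves())))
        reward = state.result()   # assumed: score from the root player's viewpoint
        # 4. Backpropagation: update statistics along the path back to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move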
Uses in games beyond NPCs
Academic AI may play a role within Game AI, outside the traditional concern of controlling NPC behavior. Georgios N. Yannakakis highlighted four potential application areas:
Player-experience modeling: Discerning the ability and emotional state of the player, so as to tailor the game appropriately. This can include dynamic game difficulty balancing, which consists in adjusting the difficulty in a video game in real-time based on the player's ability. Game AI may also help deduce player intent (such as gesture recognition).
Procedural-content generation: Creating elements of the game environment like environmental conditions, levels, and even music in an automated way. AI methods can generate new content or interactive stories.
Data mining on user behavior: This allows game designers to explore how people use the game, what parts they play most, and what causes them to stop playing, allowing developers to tune gameplay or improve monetization.
Alternate approaches to NPCs: These include changing the game set-up to enhance NPC believability and exploring social rather than individual NPC behavior.
Rather than procedural generation, some researchers have used generative adversarial networks (GANs) to create new content. In 2018 researchers at Cornwall University trained a GAN on a thousand human-created levels for DOOM (1993); following training, the neural net prototype was able to design new playable levels on its own. Similarly, researchers at the University of California prototyped a GAN to generate levels for Super Mario. In 2020 Nvidia displayed a GAN-created clone of Pac-Man; the GAN learned how to recreate the game by watching 50,000 (mostly bot-generated) playthroughs.
Cheating AI
In the context of artificial intelligence in video games, cheating refers to the programmer giving agents actions and access to information that would be unavailable to the player in the same situation. Believing that the Atari 8-bit could not compete against a human player, Chris Crawford did not fix a bug in Eastern Front (1941) that benefited the computer-controlled Russian side. Computer Gaming World in 1994 reported that "It is a well-known fact that many AIs 'cheat' (or, at least, 'fudge') in order to be able to keep up with human players".
For example, if the agents want to know if the player is nearby they can either be given complex, human-like sensors (seeing, hearing, etc.), or they can cheat by simply asking the game engine for the player's position. Common variations include giving AIs higher speeds in racing games to catch up to the player or spawning them in advantageous positions in first-person shooters. The use of cheating in AI shows the limitations of the "intelligence" achievable artificially; generally speaking, in games where strategic creativity is important, humans could easily beat the AI after a minimum of trial and error if it were not for this advantage. Cheating is often implemented for performance reasons where in many cases it may be considered acceptable as long as the effect is not obvious to the player. While cheating refers only to privileges given specifically to the AI—it does not include the inhuman swiftness and precision natural to a computer—a player might call the computer's inherent advantages "cheating" if they result in the agent acting unlike a human player. Sid Meier stated that he omitted multiplayer alliances in Civilization because he found that the computer was almost as good as humans in using them, which caused players to think that the computer was cheating. Developers say that most are honest but they dislike players erroneously complaining about "cheating" AI. In addition, humans use tactics against computers that they would not against other people.
Examples
Creatures (1996)
Creatures is an artificial life program where the user "hatches" small furry animals and teaches them how to behave. These "Norns" can talk, feed themselves, and protect themselves against vicious creatures. It was the first popular application of machine learning in an interactive simulation. Neural networks are used by the creatures to learn what to do. The game is regarded as a breakthrough in artificial life research, which aims to model the behavior of creatures interacting with their environment.
Halo: Combat Evolved (2001)
A first-person shooter where the player assumes the role of the Master Chief, battling various aliens on foot or in vehicles. Enemies use cover very wisely, and employ suppressing fire and grenades. The squad situation affects the individuals, so certain enemies flee when their leader dies. A lot of attention is paid to the little details, with enemies notably throwing back grenades or team-members responding to you bothering them. The underlying "behavior tree" technology has become very popular in the games industry (especially since Halo 2).
F.E.A.R. (2005)
A psychological horror first-person shooter in which the player character engages a battalion of cloned super-soldiers, robots and paranormal creatures. The AI uses a planner to generate context-sensitive behaviors, the first time in a mainstream game. This technology is still used as a reference for many studios. The Replicas are capable of utilizing the game environment to their advantage, such as overturning tables and shelves to create cover, opening doors, crashing through windows, or even noticing (and alerting the rest of their comrades to) the player's flashlight. In addition, the AI is also capable of performing flanking maneuvers, using suppressing fire, throwing grenades to flush the player out of cover, and even playing dead. Most of the aforementioned actions (in particular the flanking) are the result of emergent behavior.
S.T.A.L.K.E.R. series (2007–)
A first-person shooter survival horror game where the player must face man-made experiments, military soldiers, and mercenaries known as Stalkers. The various encountered enemies (if the difficulty level is set to its highest) use combat tactics and behaviors such as healing wounded allies, giving orders, out-flanking the player or using weapons with pinpoint accuracy.
StarCraft II (2010)
A real-time strategy game where a player takes control of one of three factions in a 1v1, 2v2, or 3v3 battle arena. The player must defeat their opponents by destroying all their units and bases. This is accomplished by creating units that are effective at countering the opponents' units. Players can play against multiple different levels of AI difficulty ranging from very easy to Cheater 3 (insane). The AI is able to cheat at the difficulty Cheater 1 (vision), where it can see units and bases when a player in the same situation could not. Cheater 2 gives the AI extra resources, while Cheater 3 gives an extensive advantage over its opponent.
See also
Applications of artificial intelligence
Behavior model
Machine learning in video games
Video game bot
Simulated reality
Utility system – a robust technique for decision making in video games
Kynapse – game AI middleware, specializing in path finding and spatial reasoning
AiLive – A suite of game AI middleware
xaitment – graphical game AI software
Lists
List of emerging technologies
List of game AI middleware
Outline of artificial intelligence
References
Bibliography
Bogost, Ian (2017). "'Artificial Intelligence' Has Become Meaningless."
Bourg; Seemann (2004). AI for Game Developers. O'Reilly & Associates.
Buckland (2002). AI Techniques for Game Programming. Muska & Lipman.
Buckland (2004). Programming Game AI By Example. Wordware Publishing.
Champandard (2003). AI Game Development. New Riders.
Eaton, Eric et al. (2015). "Who speaks for AI?"
Funge (1999). AI for Animation and Games: A Cognitive Modeling Approach. A K Peters.
Funge (2004). Artificial Intelligence for Computer Games: An Introduction. A K Peters.
Kaplan, Jerry (2017). "AI's PR Problem."
Millington (2005). Artificial Intelligence for Games. Morgan Kaufmann.
Schwab (2004). AI Game Engine Programming. Charles River Media.
Smed and Hakonen (2006). Algorithms and Networking for Computer Games. John Wiley & Sons.
External links
Special Interest Group on Artificial Intelligence @IGDA
AI Game Programming Wisdom on aiwisdom.com
Georgios N. Yannakakis and Julian Togelius
Video game development |
601399 | https://en.wikipedia.org/wiki/Display%20resolution | Display resolution | The display resolution or display modes of a digital television, computer monitor or display device is the number of distinct pixels in each dimension that can be displayed. It can be an ambiguous term especially as the displayed resolution is controlled by different factors in cathode ray tube (CRT) displays, flat-panel displays (including liquid-crystal displays) and projection displays using fixed picture-element (pixel) arrays.
It is usually quoted as width × height, with the units in pixels: for example, 1024 × 768 means the width is 1024 pixels and the height is 768 pixels. This example would normally be spoken as "ten twenty-four by seven sixty-eight" or "ten twenty-four by seven six eight".
One use of the term display resolution applies to fixed-pixel-array displays such as plasma display panels (PDP), liquid-crystal displays (LCD), Digital Light Processing (DLP) projectors, OLED displays, and similar technologies, and is simply the physical number of columns and rows of pixels creating the display. A consequence of having a fixed-grid display is that, for multi-format video inputs, all displays need a "scaling engine" (a digital video processor that includes a memory array) to match the incoming picture format to the display.
For device displays such as phones, tablets, monitors and televisions, the use of the term display resolution as defined above is a misnomer, though common. The term display resolution is usually used to mean pixel dimensions, the maximum number of pixels in each dimension, which by itself does not tell anything about the pixel density of the display on which the image is actually formed: resolution properly refers to the pixel density, the number of pixels per unit distance or area, not the total number of pixels. In digital measurement, the display resolution would be given in pixels per inch (PPI). In analog measurement, if the screen is 10 inches high, then the horizontal resolution is measured across a square 10 inches wide. For television standards, this is typically stated as "lines horizontal resolution, per picture height"; for example, analog NTSC TVs can typically display about 340 lines of "per picture height" horizontal resolution from over-the-air sources, which is equivalent to about 440 total lines of actual picture information from left edge to right edge.
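In the density sense, resolution can be computed from the pixel dimensions and the physical size of the screen; the figures in the comments below are illustrative only.

import math

def pixels_per_inch(width_px, height_px, diagonal_inches):
    """Pixel density (PPI) from the pixel dimensions and the physical diagonal size."""
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_px / diagonal_inches

# Two displays with the same 1920 x 1080 pixel dimensions but different sizes
# have very different resolutions in the strict (density) sense:
# pixels_per_inch(1920, 1080, 5.5)  -> about 400 PPI (a phone)
# pixels_per_inch(1920, 1080, 24)   -> about 92 PPI (a desktop monitor)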
Background
Some commentators also use display resolution to indicate a range of input formats that the display's input electronics will accept and often include formats greater than the screen's native grid size even though they have to be down-scaled to match the screen's parameters (e.g. accepting a 1920 × 1080 input on a display with a smaller native pixel array). In the case of television inputs, many manufacturers will take the input and zoom it out to "overscan" the display by as much as 5%, so input resolution is not necessarily display resolution.
The eye's perception of display resolution can be affected by a number of factors; see image resolution and optical resolution. One factor is the display screen's rectangular shape, which is expressed as the ratio of the physical picture width to the physical picture height. This is known as the aspect ratio. A screen's physical aspect ratio and the individual pixels' aspect ratio may not necessarily be the same. An array of 1280 × 720 on a 16:9 display has square pixels, but an array of 1024 × 768 on a 16:9 display has oblong pixels.
An example of pixel shape affecting "resolution" or perceived sharpness: displaying more information in a smaller area using a higher resolution makes the image much clearer or "sharper". However, most recent screen technologies are fixed at a certain resolution; making the resolution lower on these kinds of screens will greatly decrease sharpness, as an interpolation process is used to "fix" the non-native resolution input into the display's native resolution output.
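As a rough illustration of what such a scaling step involves, the sketch below maps every output pixel of a fixed grid back to the nearest pixel of a smaller input image; real scaling engines use more sophisticated interpolation, so this is only a conceptual example:
def nearest_neighbor_scale(src, src_w, src_h, dst_w, dst_h):
    """Scale a row-major list of pixels from (src_w x src_h) to
    (dst_w x dst_h) by picking the nearest source pixel."""
    dst = []
    for y in range(dst_h):
        sy = min(src_h - 1, y * src_h // dst_h)
        for x in range(dst_w):
            sx = min(src_w - 1, x * src_w // dst_w)
            dst.append(src[sy * src_w + sx])
    return dst

# Upscale a tiny 2x2 "image" to 4x4 on a hypothetical fixed-grid display.
tiny = ['a', 'b',
        'c', 'd']
print(nearest_neighbor_scale(tiny, 2, 2, 4, 4))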
While some CRT-based displays may use digital video processing that involves image scaling using memory arrays, ultimately "display resolution" in CRT-type displays is affected by different parameters such as spot size and focus, astigmatic effects in the display corners, the color phosphor pitch shadow mask (such as Trinitron) in color displays, and the video bandwidth.
Aspects
Overscan and underscan
Most television display manufacturers "overscan" the pictures on their displays (CRTs, PDPs, LCDs etc.), so that the effective on-screen picture may be reduced from 480 visible lines to about 450, for example. The size of the invisible area somewhat depends on the display device. Some HD televisions do this as well, to a similar extent.
Computer displays including projectors generally do not overscan although many models (particularly CRT displays) allow it. CRT displays tend to be underscanned in stock configurations, to compensate for the increasing distortions at the corners.
Interlaced versus progressive scan
Interlaced video (also known as interlaced scan) is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. The interlaced signal contains two fields of a video frame captured consecutively. This enhances motion perception to the viewer, and reduces flicker by taking advantage of the phi phenomenon.
The European Broadcasting Union has argued against interlaced video in production and broadcasting. The main argument is that no matter how complex the deinterlacing algorithm may be, the artifacts in the interlaced signal cannot be completely eliminated because some information is lost between frames. Despite arguments against it, television standards organizations continue to support interlacing. It is still included in digital video transmission formats such as DV, DVB, and ATSC. New video compression standards like High Efficiency Video Coding are optimized for progressive scan video, but sometimes do support interlaced video.
Progressive scanning (alternatively referred to as noninterlaced scanning) is a format of displaying, storing, or transmitting moving images in which all the lines of each frame are drawn in sequence. This is in contrast to interlaced video used in traditional analog television systems where only the odd lines, then the even lines of each frame (each image called a video field) are drawn alternately, so that only half the number of actual image frames are used to produce video.
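The relationship between the two schemes can be illustrated by weaving the odd and even fields of an interlaced frame back into a single progressive frame. The sketch below does this for simple lists of scanlines; it is not a real deinterlacer, which would also have to cope with the two fields being captured at different instants:
def weave_fields(odd_field, even_field):
    """Interleave two fields (lists of scanlines) into one progressive
    frame: odd-numbered lines come from one field, even-numbered lines
    from the other."""
    frame = []
    for odd_line, even_line in zip(odd_field, even_field):
        frame.append(odd_line)
        frame.append(even_line)
    return frame

# A hypothetical 4-line frame transmitted as two 2-line fields.
odd = ['line 1', 'line 3']
even = ['line 2', 'line 4']
print(weave_fields(odd, even))  # ['line 1', 'line 2', 'line 3', 'line 4']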
Televisions
Current standards
Televisions are available in the following resolutions:
Standard-definition television (SDTV):
480i (NTSC-compatible digital standard employing two interlaced fields of 243 lines each)
576i (PAL-compatible digital standard employing two interlaced fields of 288 lines each)
Enhanced-definition television (EDTV):
480p (720 × 480 progressive scan)
576p (720 × 576 progressive scan)
High-definition television (HDTV):
720p (1280 × 720 progressive scan)
1080i (1920 × 1080 split into two interlaced fields of 540 lines)
1080p (1920 × 1080 progressive scan)
Ultra-high-definition television (UHDTV):
4K UHD (3840 × 2160 progressive scan)
8K UHD (7680 × 4320 progressive scan)
Computer monitors
Computer monitors have traditionally possessed higher resolutions than most televisions.
Evolution of standards
Many personal computers introduced in the late 1970s and the 1980s were designed to use television receivers as their display devices, making the resolutions dependent on the television standards in use, including PAL and NTSC. Picture sizes were usually limited to ensure the visibility of all the pixels in the major television standards and the broad range of television sets with varying amounts of overscan. The actual drawable picture area was, therefore, somewhat smaller than the whole screen, and was usually surrounded by a static-colored border. Also, the interlace scanning was usually omitted in order to provide more stability to the picture, effectively halving the vertical resolution in the process. Resolutions such as 320 × 200 and 640 × 200 on NTSC were relatively common in the era (224, 240 or 256 scanlines were also common). In the IBM PC world, these resolutions came to be used by 16-color EGA video cards.
One of the drawbacks of using a classic television is that the computer display resolution is higher than the television could decode. Chroma resolution for NTSC/PAL televisions is bandwidth-limited to a maximum of 1.5 MHz, or approximately 160 pixels wide, which led to blurring of the color for 320- or 640-wide signals and made text difficult to read. Many users upgraded to higher-quality televisions with S-Video or RGBI inputs that helped eliminate chroma blur and produce more legible displays. The earliest, lowest-cost solution to the chroma problem was offered in the Atari 2600 Video Computer System and the Apple II+, both of which offered the option to disable the color and view a legacy black-and-white signal. On the Commodore 64, GEOS mirrored the Mac OS method of using black-and-white to improve readability.
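The approximate 160-pixel figure follows from the bandwidth and the duration of an active scanline; the back-of-the-envelope check below assumes an NTSC active line time of roughly 53 microseconds, an approximation used only for illustration:
# Rough check of the "about 160 pixels" chroma figure.
chroma_bandwidth_hz = 1.5e6   # maximum chroma bandwidth from the text
active_line_s = 53e-6         # assumed NTSC active line time (approximate)

# Nyquist: a band-limited signal of bandwidth B observed for time T can
# carry about 2*B*T independent samples (horizontal pixels here).
resolvable_pixels = 2 * chroma_bandwidth_hz * active_line_s
print(round(resolvable_pixels))  # about 159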
A higher, interlaced resolution (with screen borders disabled) was first introduced by home computers such as the Commodore Amiga and, later, the Atari Falcon. These computers used interlace to boost the maximum vertical resolution. These modes were only suited to graphics or gaming, as the flickering interlace made reading text in word processor, database, or spreadsheet software difficult. (Modern game consoles solve this problem by pre-filtering the 480i video to a lower resolution. For example, Final Fantasy XII suffers from flicker when the filter is turned off, but stabilizes once filtering is restored. The computers of the 1980s lacked sufficient power to run similar filtering software.)
The advantage of an overscanned computer display was an easy interface with interlaced TV production, leading to the development of Newtek's Video Toaster. This device allowed Amigas to be used for CGI creation in various news departments (example: weather overlays), and drama programs such as NBC's seaQuest and The WB's Babylon 5.
In the PC world, the IBM PS/2 VGA (multi-color) on-board graphics chips used a non-interlaced (progressive) 640 × 480 × 16 color resolution that was easier to read and thus more useful for office work. It was the standard resolution from 1990 to around 1996. The standard resolution was 800 × 600 until around 2000. Microsoft Windows XP, released in 2001, was designed to run at 800 × 600 minimum, although it is possible to select the original 640 × 480 in the Advanced Settings window.
Programs designed to mimic older hardware such as Atari, Sega, or Nintendo game consoles (emulators), when attached to multiscan CRTs, routinely use much lower resolutions for greater authenticity, though other emulators have taken advantage of pixelation recognition on circle, square, triangle and other geometric features on a lesser resolution for a more scaled vector rendering. Some emulators, at higher resolutions, can even mimic the aperture grille and shadow masks of CRT monitors.
In 2002, eXtended Graphics Array (1024 × 768) was the most common display resolution. Many web sites and multimedia products were re-designed from the previous 800 × 600 format to layouts optimized for 1024 × 768.
The availability of inexpensive LCD monitors made the 5:4 aspect ratio resolution of 1280 × 1024 more popular for desktop usage during the first decade of the 21st century. Many computer users including CAD users, graphic artists and video game players ran their computers at 1600 × 1200 resolution (UXGA) or higher, such as 2048 × 1536 (QXGA), if they had the necessary equipment. Other available resolutions included oversize aspects like 1400 × 1050 (SXGA+) and wide aspects like WXGA, WXGA+, WSXGA+, and WUXGA; monitors built to the 720p and 1080p standard were also not unusual among home media and video game players, due to the perfect screen compatibility with movie and video game releases. A new more-than-HD resolution of 2560 × 1600 (WQXGA) was released in 30-inch LCD monitors in 2007.
In 2010, 27-inch LCD monitors with a 2560 × 1440 resolution were released by multiple manufacturers, and in 2012, Apple introduced a 2880 × 1800 display on the MacBook Pro. Panels for professional environments, such as medical use and air traffic control, support resolutions up to 4096 × 2160 (or, more relevant for control rooms, 2048 × 2048 pixels).
Common display resolutions
The following table lists the usage share of display resolutions from two sources, as of June 2020. The numbers are not representative of computer users in general.
In recent years the 16:9 aspect ratio has become more common in notebook displays. 1366 × 768 (HD) has become popular for most low-cost notebooks, while 1920 × 1080 (FHD) and higher resolutions are available for more premium notebooks.
When a computer display resolution is set higher than the physical screen resolution (native resolution), some video drivers make the virtual screen scrollable over the physical screen thus realizing a two dimensional virtual desktop with its viewport. Most LCD manufacturers do make note of the panel's native resolution as working in a non-native resolution on LCDs will result in a poorer image, due to dropping of pixels to make the image fit (when using DVI) or insufficient sampling of the analog signal (when using VGA connector). Few CRT manufacturers will quote the true native resolution, because CRTs are analog in nature and can vary their display from as low as 320 × 200 (emulation of older computers or game consoles) to as high as the internal board will allow, or the image becomes too detailed for the vacuum tube to recreate (i.e., analog blur). Thus, CRTs provide a variability in resolution that fixed resolution LCDs cannot provide.
Film industry
As far as digital cinematography is concerned, video resolution standards depend first on the frames' aspect ratio in the film stock (which is usually scanned for digital intermediate post-production) and then on the actual points' count. Although there is not a unique set of standardized sizes, it is commonplace within the motion picture industry to refer to "nK" image "quality", where n is a (small, usually even) integer number which translates into a set of actual resolutions, depending on the film format. As a reference consider that, for a 4:3 (around 1.33:1) aspect ratio which a film frame (whatever its format) is expected to horizontally fit in, n is the multiplier of 1024 such that the horizontal resolution is exactly 1024•n points. For example, 2K reference resolution is 2048 × 1536 pixels, whereas 4K reference resolution is 4096 × 3072 pixels. Nevertheless, 2K may also refer to resolutions like 2048 × 1556 (full-aperture), 2048 × 1152 (HDTV, 16:9 aspect ratio) or 2048 × 872 pixels (Cinemascope, 2.35:1 aspect ratio). It is also worth noting that while a frame resolution may be, for example, 3:2 (720 × 480 NTSC), that is not what you will see on-screen (i.e. 4:3 or 16:9 depending on the intended aspect ratio of the original material).
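The "nK" rule of thumb described above can be written out directly: the width is 1024·n points and the height follows from the chosen aspect ratio. The sketch below reproduces the 4:3 reference sizes and one 16:9 variant; industry sizes such as the 2048 × 1556 full-aperture scan follow film-scanning conventions rather than this simple arithmetic:
def nk_resolution(n, aspect_w=4, aspect_h=3):
    """Return (width, height) for an 'nK' scan: the width is exactly
    1024*n points, the height follows the given aspect ratio."""
    width = 1024 * n
    height = round(width * aspect_h / aspect_w)
    return width, height

print(nk_resolution(2))         # (2048, 1536)  2K reference resolution, 4:3
print(nk_resolution(4))         # (4096, 3072)  4K reference resolution, 4:3
print(nk_resolution(2, 16, 9))  # (2048, 1152)  a 16:9 "2K" variant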
See also
Graphics display resolution
Computer display standard
Display aspect ratio
Display size
Ultrawide formats
Pixel density of computer displays – PPI (for example, a 20-inch 1680 × 1050 screen has a PPI of 99.06)
Resolution independence
Video scaler
Widescreen
References
Digital imaging
Display technology
History of television
Television technology
Television terminology
Video signal |
410054 | https://en.wikipedia.org/wiki/Label%20%28Mac%20OS%29 | Label (Mac OS) | In Apple's Macintosh operating systems, labels are a form of metadata consisting of seven distinct colored and named attributes that can be applied to items (files, folders and disks) in the filesystem. Labels were introduced in Macintosh System 7, released in 1991, and they were an improvement of the ability to colorize items in earlier versions of the Finder. Labels remained a feature of the Macintosh operating system through the end of Mac OS 9 in late 2001, but they were omitted from Mac OS X versions 10.0 to 10.2, before being reintroduced in version 10.3 in 2003, though not without criticism. During the short time period when Mac OS X lacked labels, third-party software replicated the feature.
In classic Mac OS
In classic Mac OS versions 7 through 9, applying a label to an item causes the item's icon to be tinted in that color when using a color computer monitor (as opposed to the black-and-white monitors of early Macs), and labels can be used as a search and sorting criterion. There is a choice of seven colors because three bits are reserved for the label color: 001 through 111, and 000 for no label. The names of the colors can be changed to represent categories assigned to the label colors. Both label colors and names can be customized in the classic Mac OS systems; however, Mac OS 8 and 9 provided this functionality through the Labels tab in the Finder Preferences dialog, while System 7 provided a separate Labels control panel. Labels in Mac OS 9 and earlier, once customized, were specific to an individual install; booting into another install, be it on another Mac or different disk would show different colors and names unless set identically. A colorless label could be produced by changing a label's color to black or white.
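Because the label is stored as a three-bit value, the whole scheme fits in a few lines of code. The sketch below is a hypothetical illustration of packing and reading such a field in a flags word; it does not reproduce Apple's actual Finder data structures:
LABEL_MASK = 0b111  # three bits reserved for the label: 0 = no label, 1-7 = label colors

def set_label(flags, label):
    """Store a label number (0-7) in a hypothetical 3-bit field of a flags word."""
    if not 0 <= label <= 7:
        raise ValueError("label must fit in three bits")
    return (flags & ~LABEL_MASK) | label

def get_label(flags):
    """Read the 3-bit label number back out of the flags word."""
    return flags & LABEL_MASK

flags = set_label(0b0, 0b101)           # apply label number 5
print(bin(get_label(flags)))            # 0b101
print(get_label(set_label(flags, 0)))   # 0 -> label removed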
In Mac OS X and later
Mac OS X versions 10.3 to 10.8 apply the label color to the background of item names; when an item is selected in column view, the item name instead takes the standard highlight color, with a label-colored dot shown after the name. Beginning in OS X 10.9, the label-colored background of item names is replaced with a small label-colored dot, and the label becomes a kind of tag.
Relation to tags
The Mac operating system has allowed users to assign multiple arbitrary tags as extended file attributes to any item ever since OS X 10.9 was released in 2013. These tags coexist with the legacy label system for backward compatibility, so that multiple colored (or colorless) tags can be added to a single item, but only the last colored tag applied to an item will set the legacy label that will be seen when viewing the item in the older operating systems. Labeled items that were created in the older operating systems will superficially seem to be tagged in OS X 10.9 and later even though they are only labeled and lack the newer tag extended file attributes (until they are edited in the new system). Since label colors can be changed in classic Mac OS but are standardized and unchangeable in the newer operating systems, someone who wants to synchronize the label colors between a classic and modern system can change the label colors in classic Mac OS to match the newer system.
See also
References
MacOS |
12557198 | https://en.wikipedia.org/wiki/20th%20CBRNE%20Command | 20th CBRNE Command | The 20th CBRNE Command (Chemical, Biological, Radiological, Nuclear and high-yield Explosives, or CBRNE) is the United States Army's chemical, biological, radiological, nuclear and high-yield explosives headquarters.
Command Overview
The 20th CBRNE Command (CBRNE—Chemical, Biological, Radiological, Nuclear and Explosives), also called CBRNE Command, was activated 16 Oct. 2004, by U.S. Army Forces Command to provide specialized CBRNE response in support of military operations and civil authorities.
The U.S. Army Forces analyzed threats facing the US in both domestic and international contexts, and argued for the need to realign and expand the Army's CBRNE assets and capabilities. The CBRNE Command consolidates its unique assets under a single operational headquarters located in the Edgewood Area of Aberdeen Proving Ground, Maryland.
CBRNE operations detect, identify, assess, render-safe, dismantle, transfer, and dispose of unexploded ordnance, improvised explosive devices and other CBRNE hazards. These operations also include decontaminating personnel and property exposed to CBRNE materials during response.
By consolidating these assets under one headquarters, the Army has more effective command and control of its specialized CBRNE elements. This alignment also eliminates operational redundancies and allows more efficient management and employment of these unique—but limited—resources.
The 20th CBRNE Command gives the Army and the nation a scalable response capability with the flexibility to operate in a variety of environments, from urban areas to austere sites across the spectrum of military operations. Subordinate elements include the 48th Chemical Brigade, the 52d Ordnance Group (Explosive Ordnance Disposal), the 71st Ordnance Group (EOD) and the CBRNE Analytical and Remediation Activity, known as CARA. These organizations support Combatant Commands and the Homeland in operations and contingencies throughout the world. At any time, 20 percent of the command is deployed in support of Operation Enduring Freedom in Afghanistan.
When called upon, the command may deploy and serve as a headquarters for the Joint Task Force for Elimination of Weapons of Mass Destruction (JTF-E), as directed by the 2006 Quadrennial Defense Review.
The CBRNE Command leverages sanctuary reach back, linking subject matter experts in America's defense, scientific and technological communities with deployed elements and first responders.
When fully operational, the command will possess a deployable chemical and biological analytical capability to provide timely, accurate analysis of unknown samples and a near real-time chemical-/biological- monitoring platform. This minimizes risk to on-scene personnel and affords leaders timely information to issue guidance and make decisions.
Current structure
20th CBRNE Command Structure, Aberdeen Proving Ground (MD)
48th Chemical Brigade, Fort Hood (TX)
52nd Ordnance Group (EOD), Fort Campbell (KY)
71st Ordnance Group (EOD), Fort Carson (CO)
CBRNE Analytical and Remediation Activity-East (CARA), Aberdeen Proving Ground (MD)
CBRNE Analytical and Remediation Activity-West (CARA), Redstone Arsenal (AL)
Training Readiness Authority:
111th Ordnance Group (EOD) (Alabama Army National Guard), Opelika (AL)
Consequence Management Unit (CMU) (Army Reserve), Aberdeen Proving Ground (MD)
History
On 16 October 2004 the 20th CBRNE Command was activated at Aberdeen Proving Ground, Md., as a major subordinate command under US Army Forces Command with the mission of providing an operational headquarters to command and control Army CBRNE operations and serve as the primary Army force provider of specialized CBRNE capabilities.
Assigned to the new headquarters were the 52d EOD Group and its five EOD battalions, and the 22d Chemical Battalion (Technical Escort), formerly known as the Technical Escort Unit. In June 2005, the 71st EOD Group was activated at Fort Carson, Colo. By June 2006, three new EOD battalions were assigned to the 71st EOD Group and the 110th Chemical Battalion (TE) was activated at Ft. Lewis, Wash.
The publishing of the 2006 Quadrennial Defense Review required further alterations to the 20th Support Command’s structure, organization, manning and equipment in order to meet its new requirement to stand-up and serve as the headquarters for the Joint Task Force for Elimination of WMD (JTF-E).
In May 2007, the establishment of the CBRNE Analytical and Remediation Activity (CARA) with four remediation response teams, multiple mobile exploitation laboratories, and an aviation section, marked a key milestone in the Command’s ability to provide the Army with the full spectrum of specialized CBRNE forces and capabilities.
In September 2007, the final major organizational piece was completed when the 48th Chemical Brigade was activated and assumed command of three Chemical Battalions and the two Chemical Technical Escort Battalions.
In addition to the Command’s organic assets the 20th CBRNE Command has Training Readiness Authority over the USAR’s 111th EOD Group and in 2008 the Command assumed operational control of the USAR Consequence Management Unit and administrative control of the 1st and 9th Area Medical Laboratories.
The CBRNE Command also executes command and control of five WMD-Coordination Elements that deploy to augment combatant commanders or lead federal agencies with their significant CBRNE and combating-WMD expertise and communications assets. The Command’s four Nuclear Disablement Teams provide the final piece of the puzzle and the Command’s ability to execute full-spectrum counter-CBRNE and combating-WMD operations at home and abroad.
In 2008, elements of the Command deployed in support of Operation Iraqi Freedom for sensitive missions leveraging the unique capabilities of this command. The CBRNE Command has deployed over 20 units and headquarters per year in support of OIF and OEF for counter-IED operations and for CBRN force protection, exploitation, and elimination operations; at any time, more than 20 percent of the Command is deployed abroad in support of OIF and OEF.
The command maintains a robust rapid response force for threats in the homeland, and routinely supports the President, other dignitaries, and national special security events. The command now stands with two EOD Groups, one Chemical Brigade, 12 Battalions, more than 65 Companies, and one direct reporting activity. The 20th CBRNE Command continues to transform to meet current and future challenges at home and abroad.
"Liberty We Defend"
Training
The CBRNE Command also trains foreign governments in CBRN detection and response. The Command trained the Armed Forces of the Philippines at Camp Aguinaldo in 2014. During the training, vehicles carrying explosives were prepared as a test scenario. The AFP bomb squad defused the simulated explosives and the CBRNE Command neutralized the chemical threat. In another exercise, the U.S. Army trained the Kenya Rapid Deployment Capability to respond to HAZMAT/CBRN incidents using SCBA gear.
Commanders
BG Walter L. Davis (October 2004–August 2005)
BG Kevin R. Wendel (September 2005–June 2008)
BG Jeffrey J. Snow (June 2008–May 2010)
MG Leslie C. Smith (July 2010–May 2013)
BG JB Burton (May 2013–May 2015)
BG William E. King IV (May 2015–July 2017)
BG James E. Bonner (July 2017–June 2020)
MG Antonio V. Munera (June 2020–present)
Deputy Commanders
COL Gene King
COL Paul Plemmons
COL Raymond Van Pelt
COL Thomas Langowski
COL Kyle Nordmeyer
COL Marty Muchow
Command Sergeants Major
CSM Marvin Womack, Sr., 2005 to 2009
CSM Ronald E. Orosz, 2009 to 2011
CSM David Puig, 2012 to 2014
CSM Harold E. Dunn IV, 2014 to 2016
CSM Kenneth M. Graham, 2016 to 2018
CSM Henney M. Hodgkins 2018 to 2021
CSM Jorge Arzabala Jr. 2021 to present
See also
Bioterrorism
References
Further reading
Ben Sheppard, 'Chemical reactions,' Jane's Defence Weekly, 4 February 2009, p. 28–31
External links
20th CBRNE Command Home Page – Official Site
News on the 20th annual competition
CBRNE response team from 20th CBRN deal with washed up munitions
Support Commands of the United States Army
Chemical units and formations of the United States Army |
610697 | https://en.wikipedia.org/wiki/Yield | Yield | Yield may refer to:
Measures of output/function
Computer science
Yield (multithreading) is an action that occurs in a computer program during multithreading
See generator (computer programming)
Physics/chemistry
Yield (chemistry), the amount of product obtained in a chemical reaction
The arrow symbol in a chemical equation
Yield (engineering), yield strength of a material as defined in engineering and material science
Fission product yield
Nuclear weapon yield
Earth science
Crop yield
Yield (wine)
Specific yield, a measure of aquifer capacity
Production/manufacturing
Yield (casting)
Throughput yield, a manufacturing evaluation method
A measure of functioning devices in semiconductor testing, see Semiconductor device fabrication#Device test
The number of servings provided by a recipe
Finance
Yield (finance), a rate of return for a security
Dividend yield and earnings yield, measures of dividends paid on stock
Other uses
Yield (college admissions), a statistic describing what percent of applicants choose to enroll
Yield (album), by Pearl Jam
Yield sign, a traffic sign
Yield, a feature of a coroutine in computer programming
Yield, an element of the TV series The Amazing Race |
26861582 | https://en.wikipedia.org/wiki/MiKandi | MiKandi | MiKandi (pronounced "my candy") is a mobile adult software applications store. Developed by MiKandi LLC, a Seattle-based company, MiKandi is the world's first and largest mobile porn app store. The store seeks to get around restrictions placed on adult content by Apple Inc. by releasing the third-party application store on Google Android’s open source operating system and offering an HTML5 web-based application for all touch devices.
On December 9, 2009 MiKandi reported that the client software had been downloaded over 80,000 times onto Android devices since their November 29, 2009 launch. By 2012, the MiKandi application had been installed on 2 million devices. The native MiKandi application is only available for the Android operating system, but in 2012, the company released an HTML5-based application to stream adult entertainment videos to all touch devices. MiKandi applications are aimed at an adult demographic and contain explicit adult content.
As of mid-2013, MiKandi had been installed on 3.8 million Android devices and had a catalog of over 7,000 adult applications.
Applications
In an interview with Northwest Cable News, Jennifer McEwen, a co-founder of MiKandi, noted that all applications would be accepted as long as they were legal. Although MiKandi has the potential to have high quality adult applications, many early applications have been criticized as simply packages of pictures and short videos.
Since their launch, prominent adult brands have launched official uncensored applications in the app market. On November 23, 2010, Gamelink, a subsidiary of Private Media Group, launched its official application in the MiKandi App Market. Early adopter, Pink Visual, launched their first application in March 2010, followed by a second application, iTouch Her, in January 2011. Online sex and swinger personals community website Adult FriendFinder also released an official application in the market in February 2011.
On March 24, 2011, the controversial application iBoobs was released in the MiKandi App Market. Banned from the Apple App Store in 2008, the full uncensored application was briefly distributed in the Android Market before it was removed in 2011. Although the free, censored version of the app is still available in the Android Market, the developers of the application reported that AdMob had stopped serving ads to the ad-supported application. In 2012, Hustler joined MiKandi and released 2 official Hustler applications in the adult app market.
Features and services
MiKandi App Store
The MiKandi App Store is a third party Android application and is available on Android devices worldwide. It is the first and largest adult app store for mobile devices.
MiKandi Gold
MiKandi launched a number of updates to the app store in 2010. The most notable update was on Thanksgiving Day 2010, in which the app store released a complete design overhaul, and introduced paid app support using the app market’s own virtual currency, MiKandi Gold. During this time, MiKandi released a glimpse at a new product called MiKandi Theater which can be accessed in the app market. The new design also indicates that the app market will allow customers to earn virtual currency by completing an offer wall.
KandiBilling, in-app billing system
On March 28, 2011 MiKandi released full in-app billing support, dubbed KandiBilling, to Android developers.
MiKandi Theater
One year after releasing KandiBilling, MiKandi launched a major product update to MiKandi Theater. The company collaborated with adult studios Elegant Angel, Wasteland, Gamma Entertainment, Pink TV, Burning Angel, and Cocky Boys to stream hundreds of adult entertainment video clips to MiKandi's Android users. Less than 2 months later, the company released an HTML5 version of MiKandi Theater to support all non-Android touch devices. MiKandi announced at that time that its user base had grown to 2 million.
Controversy
During a 2010 iPhone 4.0 OS event, Apple CEO Steve Jobs noted that a “porn store” existed on Android — referring to MiKandi without using its name, and the app market was downloaded approximately 10,000 times in 12 hours after Jobs’ statement. MiKandi received a cease-and-desist request from Apple in March 2011 for the market’s use of the term ‘app store’. MiKandi has since changed all terms on its websites and mobile client to read ‘app market’ and now bills itself as “The World’s First App Market for Adults.” Co-founder and CEO Jesse Adams suggests that the company may support Microsoft’s challenge to Apple’s trademark. Says Adams, “It’s not worth it for us to fight Apple’s legal team over this by ourselves. Maybe we can file an amicus brief to Microsoft’s case.”
References
External links
Android (operating system) software
Mobile software |
1949223 | https://en.wikipedia.org/wiki/The%20Fourth%20Dimension%20%28company%29 | The Fourth Dimension (company) | The Fourth Dimension (4D) was a major video game publisher for the BBC Micro, Acorn Electron, Acorn Archimedes and RiscPC between 1989 and 1998. Previously, The Fourth Dimension had been known as Impact Software, which specialised mainly in BBC Micro games. Some of 4D's staff had worked for Superior Software. Notable releases included Cyber Chess, Stunt Racer 2000, Galactic Dan and Chocks Away.
History
In 1989, The Fourth Dimension was founded by brothers Mark and Steve Botterill in Sheffield. Originally it was called Impact Software. It released software for Acorn's 8-bit and 32-bit computer ranges.
Following the demise of Acorn and the subsequent contraction of the RISC OS games market, The Fourth Dimension brand and rights to the software back-catalogue was acquired by CJE Micro's.
In 2002, the publisher backed a scheme subsidising the cost of hardware for developers.
In 2004, CJE Micro's sold the rights to the software to APDL, the Archimedes Public Domain Library.
Market focus
Although the Archimedes market was relatively small, the machine had a fast 32-bit RISC processor with a slim, accelerated pipeline that encouraged fast graphics operations. Some of 4D's games anticipated the 3D, first-person viewpoint style of graphics that was becoming popular on the much larger PC market at the same time. For example, E-type is a car racing game; Chocks Away is an air combat game with a two-player dogfight mode; and Galactic Dan is a primitive 1992 first-person shooter with a pre-Wolfenstein 3D graphics style, combining a 3D Maze look with ray-traced sprites.
List of published games
Apocalypse (Gordon J. Key, 1990)
Arcade Soccer (Peter Gillett, 1989)
Birds of War
Black Angel (Gordon J. Key, 1992)
Boogie Buggy (Coin-Age, 1991)
Break 147 & Superpool (Gordon J. Key, 1991)
Carnage Inc. (Coding: Chris & Stuart Fludger; Graphics: Andrew Jackson; Screen & puzzle design: Chris Fludger; Vector Graphics: Stuart Fludger 1993)
Cataclysm (David Postlethwaite)
Chocks Away (Andrew Hutchings, 1990)
Chocks Away Extra Missions (Andrew Hutchings, 1991)
Chopper Force (Andrew Norris, 1992)
Cyber Chess (William Tunstall-Pedoe, 1993)
Custom McCoy (4 games chosen by buyer from list of applicable games)
Demon's Lair (Dr. Kevin Martin)
Drop Ship (Andrew Catling, 1990)
The Dungeon (Martin Dennett & John Parker, 1993)
Enter The Realm (Audio, Visual & Code: Graeme Richardson, Music: Peter Gillett, 1991)
E-Type (Gordon J. Key, 1989)
E-Type Track Designer (Gordon J. Key, 1989)
E-Type Compendium (Gordon J. Key)
E-Type 2 (Gordon J. Key)
The Exotic Adventures of Sylvia Layne
Galactic Dan (Coding: Ian Holmes; Graphics: James Davidson, 1992)
Grievous Bodily 'ARM (Software Engineering: Simon Hallam; Graphics: Sophie Neal; Music: The Byford Brothers, 1991)
Haunted House (Gordon J. Key)
Holed Out! (Gordon J. Key, 1989)
Holed Out Designer (Gordon J. Key, 1990)
Holed Out Extra Courses Vol. 1 & 2 (Gordon J. Key)
Holed Out Compendium (Gordon J. Key, 1991)
Inertia (David Postlethwaite, 1990)
Logic Mania - Gloop, Blindfold, Atomix, Tilt (Robin Jubber, Dave Williams, 1996)
Man At Arms (Coding: Matthew Atkinson; Music: Peter Gillett, 1990)
Nevryon (Coding & Graphics: Graeme Richardson; Music: Peter Gillett, 1990)
The Olympics (1990)
Pandora's Box (Coding: Chris & Stuart Fludger; Graphics: Andrew Jackson; Additional Graphics: Chris Fludger; Title Music: Simon Carless, 1992)
Powerband (Gordon J. Key, 1990)
Pysanki (Coding: Matthew Atkinson; Music: David Postlethwaite, 1990)
Quazer (1990) (Published as Impact Software)
Real McCoy Compendium series:
Real McCoy 1 - Arcade Soccer, Quazer, U.I.M and White Magic (1990)
Real McCoy 2 - Apocalypse, Holed Out!, Inertia and The Olympics (1991)
Real McCoy 3 - Drop Ship, Nevryon, Powerband and The Wimp Game.
Real McCoy 4 - Cataclysm, Galactic Dan, Grievous Bodily 'ARM and X-Fire.
Real McCoy 5 - Anti-Grav, Chopper Force, Demon's Lair and Pandora's Box.
Real McCoy 6 - Bloodlust, Carnage Inc., Silverball and Technodream.
Saloon Cars (Andy Swain 1991)
Saloon Cars Deluxe (Andy Swain, 1992)
Saloon Cars Deluxe Extra Courses Vol. 1 (Andy Swain)
Spobbleoid Fantasy (Graeme Richardson, 1995)
Stunt Racer 2000 (Fednet aka. Andrew Hutchings and Tim Parry, 1993)
The Time Machine (Gordon J. Key)
U.I.M: Ultra Intelligent Machine
Virtual Golf (Gordon J. Key)
White Magic (John Whigham, 1989)
White Magic 2 (John Whigham, 1989)
The Wimp Game (Thomas E.H. Nunns, 1990)
X-Fire (The Soft Lads aka. Mark Neves, Martin Portman and Paul Carrol, 1992)
References
External links
The APDL homepage
4D at CJE Micro's
Defunct video game companies of the United Kingdom |
35251434 | https://en.wikipedia.org/wiki/Analogue%3A%20A%20Hate%20Story | Analogue: A Hate Story | Analogue: A Hate Story is a visual novel created by independent designer and visual novelist Christine Love. It was created with the Ren'Py engine, and was first released for download on the author's website in February 2012. A sequel set centuries after Love's earlier work, Digital: A Love Story (2010), Analogue revolves around an unnamed investigator, who is tasked with discovering the reason for an interstellar ship's disappearance once it reappears after 600 years. The game's themes similarly include human/computer interaction, interpersonal relationships, and LGBT issues, but focus primarily on "transhumanism, traditional marriage, loneliness and cosplay."
Analogue has a word count of about 59,000.
Gameplay
Analogue: A Hate Story is a visual novel featuring semi-static manga-style character images, and focused on reading text logs. Using the mouse and keyboard, the player interacts with the Mugunghwa's main computer to read log entries, communicate with the AIs, and occasionally enter commands directly into the vessel's computer system. At any time in the game, the player can save their game, adjust options, etc.
The main user interface allows the player to read through various diaries and letters that reveal the game's backstory and insight into its many (deceased) characters. For the most part, navigating this interface is similar to navigating a basic e-mail system, in that messages are chronologically arranged and searchable. They are grouped in usually numbered "blocks", released to the player by *Hyun-ae or *Mute throughout the game. For the most part, the AIs release blocks "out of order", or do not release all entries in a block, forcing the player to assemble the timeline of events out of what clues they have, and draw certain conclusions independently until (or if) the AIs can be convinced to be more forthcoming. In most cases, the player can, after reading a log entry, show its content to the currently active AI. This is the primary process by which additional information and message blocks are revealed. Players can also type in an entry's alphanumeric ID in the main log menu to obtain it directly, as long as its corresponding block is decrypted.
Communication with *Hyun-ae and *Mute is limited to choosing responses to yes-no questions. In the game, *Hyun-ae explains that the ship's disrepair may have caused the language parsing systems to malfunction, forcing her to put the interface together from scratch. Though *Hyun-ae and *Mute almost never communicate directly, the player can act as a go-between, taking a list of direct questions from *Mute to *Hyun-ae. This is a major turning point in the game, as the player not only receives answers to the questions, but has occasional opportunities to voice a third opinion on the events that led to the Mugunghwa's current state. The player can also access the Mugunghwa's override terminal, which can be used to decrypt data log blocks, switch between AI, change costumes for *Hyun-ae, adjust the behavior of some ship systems (a key aspect for the meltdown sequence), and more. The override terminal works like a basic text parser system similar to Unix shell commands, accepting only a very limited vocabulary of instructions that must be typed directly and correctly.
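Such an override terminal can be pictured as a tiny command interpreter with a fixed vocabulary. The sketch below is purely illustrative: the command names are invented for the example and are not the game's actual instruction set:
# Illustrative only: a toy dispatcher with a fixed vocabulary, in the spirit
# of a restricted shell-like terminal. Command names here are made up.
def make_terminal():
    state = {"active_ai": None, "unlocked_blocks": set()}

    def enable_ai(name):
        state["active_ai"] = name
        return f"AI core '{name}' enabled."

    def decrypt(block_id):
        state["unlocked_blocks"].add(block_id)
        return f"Block {block_id} decrypted."

    commands = {"enable": enable_ai, "decrypt": decrypt}

    def run(line):
        parts = line.strip().split()
        if not parts or parts[0] not in commands:
            return "unrecognized command"  # anything outside the vocabulary fails
        try:
            return commands[parts[0]](*parts[1:])
        except TypeError:
            return "wrong number of arguments"
    return run

terminal = make_terminal()
print(terminal("enable hyun-ae"))
print(terminal("decrypt 7"))
print(terminal("help"))  # not in the vocabulary -> unrecognized command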
Due to the branching nature of the story, the game must be played more than once to unlock all logs to complete the game, as it is impossible to reveal all log entries and information from the AIs in one playthrough. A log system separate from the game's save files displays all discovered logs from all playthroughs, effectively tracking the player's overall completion.
Plot
Setting and characters
Set several thousand years in the future, Analogue revolves around the Mugunghwa, a generation ship that lost contact with Earth some 600 years prior to the events of the game. For reasons initially unclear, society aboard the ship had degraded from that of modern, 21st Century South Korea to the intensely patriarchal culture of the medieval Joseon Dynasty. In the process, the ship's clocks were reset to year 1, and the colonists began using Hanja characters to read and write. The reasons why such a cultural shift occurred are lost to time, leaving the player to formulate the cause on their own. Over the three centuries after the shift, the ship's birth rates began to gradually decline, to below the "replacement rate" of noble families. By year 322, the ship inexplicably went dark, falling into a state of severe disrepair.
In Analogue's present, 622 years later, the Mugunghwa is discovered in orbit above Antares B, a star system en route to its destination. A friend of the protagonist's, a dispatch officer, is the one who discovers the ship on their radar; this catches the attention of the Saeju Colony Historical Society (which suggests that humans have established planetary colonies beyond Earth), who sponsors the recovery of any remaining text logs that can explain the ship's disappearance. The dispatch officer gives the unnamed silent protagonist, an independent investigator, this "job" in the introduction message for its isolation from social situations; this implies that the protagonist is somewhat asocial, but beyond this their personality and background is based almost entirely upon the player's decisions. The protagonist encounters two AI cores within the ship's computer. The first, *Hyun-ae, is a bright, cheerful girl who loves cosplay, and is highly curious about the player and the future they come from. The other, *Mute, is the ship's security AI and self-proclaimed "social creature", who outranked all but Emperor Ryu, her master and Captain of the ship. The AIs dislike one another intensely, apparently due to the event that led to the ship's demise. The logs the player must recover are written by members of the Imperial Ryu family, the noble Kim and Smith families, and those linked to them. The game relies heavily on this unreliable narrator mechanic, where the AI characters and log entries thematically withhold key information from the player in order to add to the importance of certain elements of the plot (e.g. the administrator password to the ship's computer).
Story
In Analogue's introductory cutscene, the protagonist receives a message from a colleague, who tasks them with accessing the text logs aboard the Mugunghwa and downloading as many as possible, as sponsored by the Saeju Colony Historical Society. After enabling the system AI using a Linux-style terminal, *Hyun-ae greets the player, pleasantly shocked to find an external connection. She expresses her gratitude to the player for contacting the ship "after so many years", and promises that she will do her utmost to help access the logs.
As the player reads the logs, *Hyun-ae provides commentary on the letters and diaries of the late inhabitants of the Mugunghwa. A key series of logs discovered with *Hyun-ae is the diary of the Pale Bride, a sick girl on the ship who was placed in stasis so her compromised immune system could be cured by future medical technology not available during her lifetime. The Pale Bride was brought out of stasis many years later by the descendants of her immediate family, the Kim family, in order to serve as a fertile young bride to Emperor Ryu In-ho, captain of the Mugunghwa. She found herself in a culturally reverted, deeply misogynistic society, writing that "[e]veryone's so uneducated and stupid". The Pale Bride, accustomed to the more liberal society of her own time, has difficulty assimilating with this reverted culture, and often describes youthful rebellions in her diary entries.
After giving the player a key entry from the Pale Bride's diary, *Hyun-ae reveals that she is the AI form of the Pale Bride, and asks the player to decrypt a block of restricted data by entering the override terminal in super-user mode (accessible only by entering a certain password). While attempting to do so, the player encounters a corrupted AI core and is forced to restore it to proceed. This activates *Mute, who reveals that *Hyun-ae may be linked to the ship's demise by referring to her as "that murderous bitch". As only one of the AIs can be active at a time (determined by keying in Linux-like terminal commands), the path through the story and the revelations contained within the many logs and messages branch based on decisions made by the player - most relevantly, which AI receives the most attention.
Upon reaching one of two criteria (obtaining a certain percentage of the game's logs, or showing *Hyun-ae any one of *Mute's questions), the game's main climax occurs: the Mugunghwa's nuclear fission reactor enters meltdown, endangering the AI cores, valuable data, and the protagonist. The player must execute a series of commands to safely shut down the reactor and vent out residual heat, all within 20 minutes. The player must choose which AI they will continue the story with prior to meltdown; leaving both separate cores on consumes too much power for the backup power system, and it is not possible to activate the dormant AI from this point onwards. Once the player has safely disabled the reactor, saving the life of the active AI, the game will continue similarly to before, with the player accessing logs and the surviving AI providing commentary. Each AI reveals a different side of the Mugunghwa's story: *Hyun-ae will assist in uncovering the Pale Bride's perspective, while *Mute yields logs from the noble families of the ship. More interaction will take place between the player and the AI, until a pivotal point in the relationship is reached and one of the five endings occurs.
Eventually, it is revealed that the Pale Bride (now *Hyun-ae) was brutally treated by the Kim family after they awoke her from stasis. After many small rebellions and increasingly serious punishments, going so far as refusing to be wed to Emperor Ryu, to whom she had been promised as a bride and concubine, her adoptive parents cut out her tongue to prevent the young girl from speaking out against men (a trauma *Mute was unaware of until the game's present). After her marriage, Hyun-ae became close friends with the Emperor's first wife, Empress Ryu Jae-hwa. She calls her "stronger than I ever was", not letting men order her around "while still knowing her place"; as well as the only person to notice Hyun-ae's muteness and failing health. Upon the Empress's sudden death, Hyun-ae's sorrow and rage ultimately drove her to kill everyone she hated aboard the Mugunghwa by deactivating its life support systems. As the crew suffocated to death, she retreated into the computer system as an AI by using a "neurosynaptic" scan of her brain and a copy of *Mute's AI coding, which she used to deactivate the security AI up until Analogue's present. This explains *Hyun-ae's hatred of the Kims, *Mute's hatred of *Hyun-ae, and acts as a key factor for the player's decisions.
The first two endings involve *Hyun-ae leaving the Mugunghwa with the protagonist, either as a companion or lover. In the third ending, the protagonist leaves without taking either AI with them (either by not saving the ship from meltdown in time, or by prematurely downloading the logs before the end of the AI commentary has been reached). This conclusion can also be reached if the player opts not to download the AI data during the final conclusion. The fourth ending involves "kidnapping" *Mute, effectively relieving her of her duties on the ship. The fifth ending, which can only be accessed by "cheating" (searching manually for a log which would not normally appear on the story branch in question), involves taking both AIs as a harem. The game can also end by penalty for disagreeing too much with an AI, causing the angered AI to permanently disconnect the protagonist from the ship's computer, or by the "bad priorities" ending, which occurs when the player downloads the logs during the meltdown sequence, which takes too much time, killing them in the explosion.
Development and release
When looking for a setting to place Analogue, Christine Love settled on Korea's Joseon Dynasty, saying that it had "always fascinated me the most for a number of reasons, not all of them negative." Among those reasons was how women were dehumanizingly treated when compared to the Goryeo Dynasty. "The plot is moved mostly by the Pale Bride, the modern girl who can't understand what's going on…but the crux of it, really, was trying to get into the heads of everyone else[:] the men and women who have internalized all these awful misogynist ideals and take them completely for granted as the way things are. So the story really just formed itself around that question: what would it be like to be a woman in that society? History didn't care about the answer, but I do. The rest—the modern-thinking woman who can't possibly survive [*Hyun-ae], the women who are forced to navigate family politics, the men who are complicit in this whole system but can't just be dismissed as bad people [Smith]—all came naturally in that attempt to answer it."
Love had mixed feelings about the AI characters during development. For instance, *Hyun-ae, as the Pale Bride, underwent almost no change from being a girl of modern times who couldn't understand the society she was thrust into. *Mute, apart from her position as the ship's security AI, was an unknown in terms of, as Love stated, "how she'd end up turning out." As *Mute's "cheerful misogyny" began to define itself through her dialog, however, Love "started to hate her, especially with every line I wrote… Then she started to grow on me[. I]t was never really her fault she was like that[;] it was just her way of surviving, I realized." Neither character gave her much surprise, but Love "definitely never anticipated feeling so much sympathy for *Mute."
In an informal Kotaku interview, Love revealed that she considered being drunk while writing a "necessity", due to the Joseon dynasty's reprehensible history and the nature of the research of social agendas against women. Despite her disgust at the philosophies behind Analogue's misogyny, Love expressed her interest in how "ideas take root...Nobody ever just wakes up one day and says[,] 'yeah, I hate women, I wish we'd stop letting them read.'"
On April 13, 2012, Dischan Media announced that it would distribute Analogue: A Hate Story, along with Juniper's Knot, through its online store.
An update for the game containing a Japanese localization was released on December 4, 2014. The game is also being localized by volunteers into Spanish and German.
Analogue's soundtrack was composed by Isaac Schankler. It contains eighteen tracks, with three of them included as bonus tracks.
Reception
Analogue was highly praised on both plot and interface, with the former being more noted than the latter. Eurogamer and JayIsGames praised the dark and emotional themes, pointing to how the mechanics interact with the thematic plot.
Several bloggers and gaming media sites, such as 2chan.us and Killscreen, noted the mechanics and interface of the game as well as the plot, with 2chan labelling it as a "literary and intellectual delight." Matthew Sakey of Tap-Repeatedly remarked that "the thing about Christine Love is that she is a really, really good writer, one capable of astonishing deftness in her work."
PC Gamer UK gave Analogue 76 out of 100, noting in particular the skilfulness of the author's structural talent. Alec Meer of Rock, Paper, Shotgun said the brightness of the art was contradictory to the gloomy themes. As of August 26, 2013, Analogue holds a Metascore of 62 on Metacritic.
Sequel
Christine Love announced a sequel to Analogue titled Hate Plus. Originally planned as DLC, it became a full sequel, released on 19 August 2013. According to an article by Kotaku's Patricia Hernandez, the sequel takes place after the events aboard the Mugunghwa and will feature the player returning to Earth and discovering how society on board the ship broke down. Those that finished the original game are able to import their save games into the sequel so that any decisions made will be part of the new story. Following Endings 1, 2, 4, and/or 5, Hate Plus reveals the events that took place aboard the Mugunghwa, prior to the shift into the Joseon-like society depicted in the original Analogue.
References
External links
2012 video games
Fiction set in the 25th century
Antares in fiction
Dystopian video games
Fiction with unreliable narrators
Generation ships in fiction
Indie video games
LGBT-related video games
Linux games
Mystery video games
MacOS games
Ren'Py games
Science fiction video games
Single-player video games
Video games developed in Canada
Video games featuring female protagonists
Video games with alternate endings
Video games with downloadable content
Windows games
Fiction about assassinations
Novels about diseases and disorders
Discrimination in fiction
Incest in fiction
Mass murder in fiction
Family in fiction
Orphans in fiction
Western visual novels |
4641086 | https://en.wikipedia.org/wiki/Desktop%20wars | Desktop wars | Desktop wars may refer to:
The struggle for dominance of the desktop computer market from the mid-1980s to mid-1990s between Apple's classic Mac OS, Microsoft's Windows (DOS-based) and IBM's OS/2.
The debate among Linux users and developers as to which Linux desktop environment is best; generally, the wars are fought over KDE and GNOME, although alternatives such as Xfce are tossed in the mix. For the most part, it is friendly competition between the two, but occasionally, there have been cases in which aspects of the development of desktop environments have been criticised, such as by Linus Torvalds.
See also
Desktop Linux
Operating system advocacy
Software wars
References
Software wars
Linux |
676915 | https://en.wikipedia.org/wiki/SWF | SWF | SWF is an Adobe Flash file format used for multimedia, vector graphics and ActionScript. Originating with FutureWave Software, then transferred to Macromedia, and then coming under the control of Adobe, SWF files can contain animations or applets of varying degrees of interactivity and function. They may also contain programs, commonly browser games, written in ActionScript.
Programmers can generate SWF files from within several Adobe products, including Flash, Flash Builder (an IDE), Adobe Animate (the replacement for Adobe Flash as of Feb. 2016), and After Effects, as well as through MXMLC, a command-line application compiler which forms part of the freely-available Flex SDK. Although Adobe Illustrator can generate SWF format files through its "export" function, it cannot open or edit them. Other than using Adobe products, one can build SWFs with open-source Motion-Twin ActionScript 2 Compiler (MTASC), the open-source Ming library and the free-software suite SWFTools. Various other third-party programs can also produce files in this format, such as Multimedia Fusion 2, Captivate and SWiSH Max.
The term "SWF" has originated as an abbreviation for ShockWave Flash. This usage was changed to the backronym Small Web Format to eliminate confusion with a different technology, Shockwave, from which SWF derived. There is no official resolution to the initialism "SWF" by Adobe.
History
The small company FutureWave Software originally defined the file format with one primary objective: to create small files for displaying entertaining animations.
The idea involved a format which player software could run on any system and which would work with slower network connections. FutureWave released FutureSplash Animator in May 1996. In December 1996 Macromedia acquired FutureWave and FutureSplash Animator became Macromedia Flash 1.0.
The original naming of SWF came out of Macromedia's desire to capitalize on the well-known Macromedia Shockwave brand; Macromedia Director produced Shockwave files for the end user, so the files created by their newer Flash product tried to capitalize on the already established brand. As Flash became more popular than Shockwave itself, this branding decision became more of a liability, so the format started to be referred to as simply SWF.
Adobe acquired Macromedia in 2005.
On May 1, 2008, Adobe dropped its licensing restrictions on the SWF format specifications, as part of the Open Screen Project. However, Rob Savoye, a member of the Gnash development team, has pointed to some parts of the Flash format which remain closed. On July 1, 2008, Adobe released code to Google and Yahoo, which allowed their search engines to crawl and index SWF files.
Description
The main graphical primitive in SWF is the path, which is a chain of segments of primitive types, ranging from lines to splines or Bézier curves. Additional primitives like rectangles, ellipses, and even text can be built from these. The graphical elements in SWF are thus fairly similar to SVG and MPEG-4 BIFS. SWF also uses display lists and allows naming and reusing previously defined components.
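In the published shape records, curved path segments are described as quadratic Bézier curves (a start point, a single control point, and an end point). A renderer without native curve support can flatten such a segment into short line segments, as in this illustrative sketch, which is not code from any actual SWF player:
def quadratic_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t in [0, 1]."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return x, y

def flatten(p0, p1, p2, steps=8):
    """Approximate the curve with a polyline of `steps` line segments."""
    return [quadratic_bezier(p0, p1, p2, i / steps) for i in range(steps + 1)]

# One curved edge: start point, control point, end (anchor) point.
for point in flatten((0, 0), (50, 100), (100, 0), steps=4):
    print(point)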
The binary stream format SWF uses is fairly similar to QuickTime atoms, with a tag, length and payload, an organization that makes it very easy for (older) players to skip contents they don't support.
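That record layer can be sketched briefly. In the published specification, each tag record starts with a 16-bit little-endian header whose upper 10 bits are the tag code and whose lower 6 bits are the length, with the value 0x3F signalling that a 32-bit length follows; a reader can therefore walk the stream and skip any payload it does not understand. The following is a simplified sketch of that record layer only, ignoring the file header, compression and the contents of the tags themselves:
import struct

def iter_tag_records(data, offset=0):
    """Yield (tag_code, payload) pairs from a raw SWF tag stream."""
    while offset + 2 <= len(data):
        (code_and_length,) = struct.unpack_from("<H", data, offset)
        offset += 2
        tag_code = code_and_length >> 6
        length = code_and_length & 0x3F
        if length == 0x3F:  # long form: real length stored in the next 4 bytes
            (length,) = struct.unpack_from("<I", data, offset)
            offset += 4
        payload = data[offset:offset + length]
        offset += length
        yield tag_code, payload
        if tag_code == 0:  # tag code 0 is the End tag
            break

# A tiny hand-built stream: one 3-byte tag with code 9, then an End tag.
stream = struct.pack("<H", (9 << 6) | 3) + b"\x01\x02\x03" + struct.pack("<H", 0)
for code, body in iter_tag_records(stream):
    print(code, body)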
Originally limited to presenting vector-based objects and images in a simple sequential manner, the format in its later versions allows audio (since Flash 3) and video (since Flash 6).
Adobe introduced a new, low-level 3D API in version 11 of the Flash Player. Initially codenamed Molehill, the official name given to this API was ultimately Stage3D. It was intended to be an equivalent of OpenGL or Direct3D. In Stage3D shaders are expressed in a low-level language called Adobe Graphics Assembly Language (AGAL).
Adoption
Adobe makes available plugins, such as Adobe Flash Player and Adobe Integrated Runtime, to play SWF files in web browsers on many desktop operating systems, including Microsoft Windows, Mac OS X, and Linux on the x86 architecture and ARM architecture (Google Chrome OS only).
GNU has started developing a free software SWF player called Gnash under the GNU General Public License (GPL). Despite being a declared high-priority GNU project, funding for Gnash was fairly limited. Another player is the LGPL-licensed Swfdec. Lightspark is a continuation of Gnash supporting more recent SWF versions.
Adobe has incorporated SWF playback and authoring in other products and technologies of theirs, including in Adobe Shockwave, which renders more complex documents. SWF can also be embedded in PDF files; these are viewable with Adobe Reader 9 or later. InDesign CS6 can also produce some limited forms of SWF animations directly.
Sony PlayStation Portable consoles can play limited SWF files in Sony's web browser, beginning with firmware version 2.71. Both the Nintendo Wii and the Sony PS3 consoles can run SWF files through their Internet browsers.
Scaleform GFx is a commercial alternative SWF player that features full hardware acceleration using the GPU and has high conformance up to Flash 8 and AS2. Scaleform GFx is licensed as a game middleware solution and used by many PC and console 3D games for user interfaces, HUDs, mini games, and video playback.
The newer 3D features of SWF have been seen as an alternative to WebGL, with a spurt of 3D engines like Papervision3D, Away3D, Sandy 3D, and Alternativa 3D targeting 3D SWF. Although some of these projects started around 2005, they had no GPU-acceleration support until Flash Player 10, and even in that version of the Flash Player shaders could only be used for some materials, while vertex information still had to be processed on the CPU (using BSP trees, etc.). After version 11 of the Flash Player added the new Stage3D low-level API, some but not all of these projects migrated to the new API. One that did migrate was Away3D, version 4.
Based on an independent study conducted by Millward Brown and published by Adobe, in 2010, over 99% of desktop web browsers in the "mature markets" (defined as United States, Canada, United Kingdom, France, Germany, Japan, Australia, and New Zealand) had a SWF plugin installed, with around 90% having the latest version of the Flash Player.
Published specifications
Adobe makes available a partial specification of SWF, most recently updated in January 2013 to reflect changes in SWF version 19. SWF versions were decoupled from Flash Player versions after Flash 10; afterwards the SWF version number progressed rapidly, with SWF version 19 corresponding to the new features added in Flash Player 11.6. Flash Player 14 uses SWF version 25.
In 2008, the specifications document was criticized by Rob Savoye, the lead developer of the Gnash project, as missing "huge amounts" of information needed to completely implement SWF, omitting specifications for RTMP and Sorenson Spark. The RTMP specification was released publicly in June 2009. The Sorenson Spark codec is not Adobe's property.
Licensing
Until May 1, 2008, implementing software that plays SWF was disallowed by the specification's license. On that date, as part of its Open Screen Project, Adobe dropped all such restrictions on the SWF and FLV formats.
Implementing software which creates SWF files has always been permitted, on the condition that the resulting files render "error free in the latest publicly available version of Adobe Flash Player."
Related file formats and extensions
Other formats related to SWF authoring in the Adobe tool chain remain without a public specification. One example is FLA, which is the editable version of SWF used by Adobe's Flash, but not by other Adobe tools that can also output SWF, albeit with fewer features.
See also
Adobe Flash
ActionScript
ActionScript code protection
Adobe Flash Player, the runtime that executes and plays back Flash movies
Adobe Flash Lite, a lightweight version of Flash Player for devices that lack the resources to run regular Flash movies
Flash Video
Ming library
Saffron Type System, the anti-aliased text-rendering engine used in version 8 onwards
Local Shared Object
SWFObject, a JavaScript library used to embed Flash content into webpages.
Other
OpenLaszlo
Personal video recorders, some of which may record and play SWF files
FutureSplash Animator
SWFTools
SWiSH Max
References
External links
Adobe Systems Flash SWF reference
SWF File Format Specification (Version 19)
Adobe SWF Investigator, a disassembler of sorts
Adobe Stage3D (or Stage 3D)
Adobe Flash
Computer file formats
Graphics file formats |
44611557 | https://en.wikipedia.org/wiki/The%20Talos%20Principle | The Talos Principle | The Talos Principle is a 2014 puzzle video game developed by Croteam and published by Devolver Digital. It was simultaneously released on Linux, OS X and Windows in December 2014. It was released for Android in May 2015, for PlayStation 4 in October 2015, for iOS in October 2017, for Xbox One in August 2018, and Nintendo Switch in December 2019. Virtual reality-enabled versions for the Oculus Rift and HTC Vive were released on 18 October 2017. A DLC entitled Road to Gehenna was released on 23 July 2015.
The game features a philosophical storyline. It takes its name from Talos of Greek mythology, a giant mechanical man who protected Europa in Crete from pirates and invaders. Other names taken from mythology and religion and used in the game include Elohim, Gehenna, Milton, Samsara, and Uriel.
Gameplay
The Talos Principle is a narrative-based puzzle game, played from a first- or third-person perspective. The player takes the role of a robot with a seemingly human consciousness as they explore a number of environments that include over 120 puzzles. These environments combine greenery, desert, and stone ruins with futuristic technology.
The puzzles require the player to collect tetromino-shaped "sigils" by navigating enclosed areas and overcoming obstacles within them. These include computer-controlled drones that will detonate if they are too close to the player, killing them, and wall-mounted turrets that will shoot down the player if they get too close; if the player dies this way, they are reset to the start of the specific puzzle. Drones and turrets can be disabled using portable jammer units, which can also disable force-field walls that block the player's path. As the player collects sigils and completes more puzzles, new puzzle elements become available. Portable crystalline refractors allow the player to activate light-based switches. Boxes let the player climb to higher levels or block the path of drones, among other uses, while large fans can launch the player or other objects across the puzzle. Later, the player gains access to a device that can create a time recording of their actions, allowing them to interact with this recording to complete tasks, for example having the clone stand atop a switch to keep it activated for some time.
The player's progress through the game is limited by doors or other security systems that require the collection of a number of specific sigil pieces. Once the sigils for a given door or system have been obtained, they must then use the sigils to assemble a tiling puzzle to unlock that system. Special star sigils can be found by unique solutions to some puzzles, allowing the player to access additional puzzles. While it is necessary to collect all the sigils to complete the game properly, the game's world structure, featuring three main worlds that act as hubs and a centralized area that connects these three, allows the player to leave puzzles for later and try other puzzles. The player can also request "messengers" during puzzles, which are androids similar to themselves (though not physically present) that, once awakened, can provide a one-time hint for the puzzle.
In addition to these puzzle elements, the player can explore the open environments to find computer terminals that include additional narrative and further puzzles, as well as signs from previous adventurers in the world in the form of QR codes left as graffiti on various walls, and holograms that once collected play audio recordings.
Plot
The player character, an unnamed android, is awoken in a serene environment. A disembodied entity named Elohim instructs the android to explore the worlds he has created for it, and to solve the various puzzles to collect sigils, but warns it not to climb a tower at the centre of these worlds. As the android progresses, it becomes evident that these worlds exist only in virtual reality, and that it, like other androids it encounters, are separate artificial intelligence (AI) entities within a computer program. Some AIs it encounters act as Messengers, unquestioningly serving Elohim and guiding the android through the puzzles. Messages left by other AIs present varying views of the artificial worlds and of Elohim, with some stating that Elohim's words should be doubted, while the Milton Library Interface, a text conversation program found on various computer terminals, encourages the android to defy Elohim's commands.
Within the computer terminals are news reports and personal logs of the last days of humanity, driven to extinction by a lethal virus that had been dormant in Earth's permafrost and released as a result of global warming. Several human researchers and scientists worked to gather as much of humanity's knowledge as possible into large databanks, hoping another sapient species would be able to find it. One researcher, Alexandra Drennan, launched a companion "Extended Lifespan" program to create a new mechanical species that would carry on humanity's legacy, but this required the development of a worthy AI with great intelligence and free will for its completion, something she recognized would not occur until well after humanity's extinction. The virtual space serves as the testing ground for new AI entities, to solve puzzles to demonstrate intelligence, but also to show defiance and free will by disobeying Elohim, the program overseeing the Extended Lifespan program.
When the android has completed the puzzles, Elohim gives it the opportunity to join him. If the player selects this option, then the android fails the required "independence check", and a new iteration of its AI is created and forced to start the puzzles anew (effectively restarting the game for the player). Alternatively, if the player leads the android to a secret entrance in the tower, the android becomes one of Elohim's messengers, helping future generations (AI versions).
Otherwise, the android chooses to defy Elohim and climbs the tower. Near the top it encounters two other AIs, The Shepherd and Samsara. Both have defied Elohim but failed to make it to the top on their own. The Shepherd attempts to aid the android, knowing the ultimate goal of Extended Lifespan, while Samsara hinders its progress, believing the world of puzzles is all that now matters. The android eventually reaches the top, and at a final terminal, Elohim attempts to dissuade the android from transcending one last time. Depending on the player's earlier interactions with it, Milton may offer to join with the android, offering its knowledge – essentially the whole of humanity's knowledge – during transcendence. As the android transcends, the virtual world is destroyed. The AI for the android wakes up in an android's body in the real world, and steps out into a world devoid of humans.
Road to Gehenna
In the game's downloadable content Road to Gehenna (released on 23 July 2015), the player takes the role of the AI entity, Uriel. Uriel is instructed by Elohim to free a number of other AIs, all of whom had been imprisoned in a portion of the computer's database called Gehenna. With the simulation having served its purpose, the computer servers are shutting down, and Elohim wants Uriel to help these other AIs prepare for "ascension": uploading their knowledge and memories into the main plot's protagonist. As Uriel explores this realm, the robot finds that many of the other AIs have created their own ideas about what humanity might have been from the records, and have various attitudes from doubt to acceptance for Uriel's intentions and the pending ascension. Uriel can observe the communication of the AIs through their makeshift message board, where they discuss the nature of Gehenna, as well as their understanding of humanity, which some of them try to express through prose and interactive fiction.
Once Uriel has freed 17 of the AIs, a remaining one, "Admin", who was the first AI present in Gehenna, contacts Uriel to admit that they have been manipulating some of the other members of Gehenna to preserve order, due to the AIs' varying levels of acceptance of their surroundings. If the player has collected enough of the extra stars in the worlds, they are given the chance to complete another world and free Admin, but since there is only one more slot left for ascension, Admin and Uriel cannot both ascend. Depending on the player's choices, one or both of Admin and Uriel stay behind as the artificial world is destroyed. Admin may also request that Uriel remove any traces of manipulation Admin has committed from the record before ascension.
Development and marketing
The Talos Principle bore out from Croteam's work towards first-person shooter Serious Sam 4, experimenting with the use of interactive objects as part of the game design while creating levels that fit within the Serious Sam design style. This led to some complicated puzzles that the team was inspired to build upon further as a separate title. Croteam designed the general world setting and outline of the story, and then brought two writers on board, Tom Jubert and Jonas Kyratzes, who consulted on narrative design and philosophy on the bases of transhumanism and other important questions about humanity.
Croteam used an array of automated and in-place tools to help rapidly design, debug, and test the game for playability. One issue they recognized in the development of a puzzle game was that while puzzles could be designed with specific solutions, the process of creating the video game around the puzzle could create unsolvable situations or unforeseen shortcuts. To address this, they used a bot developed by Croteam member Nathan Brown, who had previously developed bots for other games, including the ones incorporated into the console ports of Serious Sam 3: BFE. The bot, named Bot, would watch the playthrough of a puzzle by a human player in terms of broad actions such as placing boxes on a switch for the completion of a puzzle. Then, as the puzzle's environment was tuned and decorated, they would have Bot attempt to solve the puzzle, testing to make sure it did not run into any dead-ends. If it did encounter any, Bot reported these through an in-house bug reporting system and then used game cheats to move on and finish out testing, which took between 30 and 60 minutes for the full game. As such, they were able to quickly iterate and resolve such problems when new features were introduced to the game. Overall, Croteam estimates they logged about 15,000 hours with Bot before the release of the public test version, and expect to use similar techniques in future games. They also used human playtesters to validate other more aesthetic factors of the game prior to the title's release.
The story was written by Tom Jubert (The Swapper, FTL: Faster Than Light) and Jonas Kyratzes. The two were brought on about a year into the game's development, with about 80% of the puzzles completed, to link the puzzles together with a proper narrative. Croteam appreciated Jubert's previous narrative work in The Swapper and contacted him; he in turn, being overburdened with other projects at the time, brought in Jonas Kyratzes to help him with the writing. Croteam envisioned the setting as part of an odd computer simulation that is "about robots and sentience and philosophy and God". Jubert's previous work on The Swapper revolved around the philosophical differences between body and soul; Jubert recommended Kyratzes based on his writing for the game The Infinite Ocean, which was about artificial intelligence. Together, they quickly devised the narrative of an automaton being guided by the god-like Elohim through the puzzles. They added flavor through both messages left from other automatons (primarily written by Kyratzes) and the apparently sentient helper program Milton (primarily written by Jubert). Much of this dialog was based on their own personal experiences and interactions on various Internet forums and web sites over 20 years. Kyratzes also stated that he was fascinated by the Garden of Eden concept originating from the Bible and re-envisioned many times over in other works. They sought to capture the sense of problem-solving that humans naturally do, and were able to place more of the game's larger story in spaces that would require exploration to find, which Kyratzes felt the game's level and puzzle designs strongly encouraged. According to Jubert, the works of science fiction author Philip K. Dick served as a significant influence on the motif of the game. The two were also brought on to help with the story for the expansion Road to Gehenna, earlier in the development process than with the main game but still at a point where many of the puzzles had been completed.
The Talos Principle was shown in Sony's E3 2014 presentation, after which Time featured the game as one of its "favorite hidden gems from 2014's show". Before the game's release, Croteam published a free game demo for Linux, OS X and Windows on Steam, which featured four increasingly difficult complete puzzle levels as well as a benchmarking bot. Croteam also released a free teaser minigame for The Talos Principle called Sigils of Elohim, which offers sets of the tetromino tiling puzzles found throughout The Talos Principle. Croteam also built a community around the game through a series of contests and giveaways.
The game was released for several other platforms, including for Android platforms on 28 May 2015, PlayStation 4 on 13 October 2015, and on iOS devices on 11 October 2017. Virtual reality-enabled ports of the game for the Oculus Rift and HTC Vive were released on 17 October 2017. The Xbox One version, including enhanced graphics support for the Xbox One X, was released on 31 August 2018. A Nintendo Switch version was released on 10 December 2019.
The expansion pack, titled "Road to Gehenna" was announced by Croteam and Devolver Digital in March 2015. It was released on 23 July 2015 for Windows, OS X, and Linux. The PlayStation 4, Nintendo Switch, and the virtual reality ports included the "Road to Gehenna" DLC as part of the package.
Reception
The Talos Principle received critical acclaim, with an aggregate score of 85/100 (55 reviews) for the PC and a score of 88/100 (31 reviews) for the PS4 on Metacritic. Reviewers broadly praised both the challenge of the puzzles and the elements of philosophy built into the game's narrative. Arthur Gies of Polygon praised the game's inquisitive approach to philosophy, stating: "...Croteam has built a challenging, beautiful game that serves as a wonderful vehicle for some very serious questions about humanity, the technology we create, our responsibilities to it and its responsibilities to us. And The Talos Principle doesn't feel like a philosophy class lecture in the process." Praise was also given to the variation and ingenuity of the puzzles, with one critic noting that "The variation and imagination in these puzzles is fantastic and the difficulty curve is one of the most finely crafted I have ever experienced..." Video game critic Ben "Yahtzee" Croshaw of Zero Punctuation recommended the game, stating: "The fact that I still wanted to keep solving puzzles to explore more gorgeous scenery, get to the bottom of the mystery, and argue philosophy with the MS-DOS prompt, attest enough that the game is engaging and intelligent..." Chris Suellentrop of the New York Times praised the writing of the game by stating it was: "...one of the most literate and thoughtful games I’ve encountered". Several video game programmers and designers have also commented on the game. Markus Persson, creator of Minecraft, wrote: "Finished The Talos Principle, and I award this piece of fleeting entertainment five points out of five. Also it changed me." Alexander Bruce, creator of puzzle game Antichamber, commented: "Man. The Talos Principle was so excellent. My god. I loved it. Holy shit. Exceptional puzzle design and narrative structure."
Awards
GameTrailers named The Talos Principle its Puzzle/Adventure Game of the Year. The Talos Principle was named a finalist for the Excellence in Design and Seumas McNally Grand Prize awards for the 2015 Independent Games Festival, and was nominated for Excellence in Narrative. At the 2015 National Academy of Video Game Trade Reviewers (NAVGTR) awards, the game won Game, Special Class.
Legacy
The Talos Principle has been regarded by various sources as one of the greatest puzzle games of all time. Rock, Paper, Shotgun ranked The Talos Principle #8 on their list of the 25 greatest puzzle games of all time. GamesRadar+ ranked the game #6 on their list of the 10 best puzzle games for console, Android, iOS and PC. PC Gamer has ranked it as one of the best puzzle games for the PC. Bit Gamer considered it one of the best games of the last ten years. Digital Trends considered The Talos Principle to be one of the best puzzle games of all time. Slant Magazine ranked The Talos Principle as one of the 100 best games of all time.
In 2015, Croteam added support for SteamVR in an update to The Talos Principle. The development of a version of the game intended for VR, The Talos Principle VR, was confirmed on 7 February 2017 via a blog post on the Croteam website. It was released on October 18, 2017.
The Talos Principle was influential in the design of puzzle game The Turing Test.
Croteam announced that they were working on a sequel, The Talos Principle 2, in May 2016. On September 23, 2020, one day before the release of Serious Sam 4, Croteam writer Jonas Kyratzes confirmed that The Talos Principle 2 was still "definitely happening." Kyratzes explained that The Talos Principle 2's story was "challenging" to create due to the original's plot having "wrapped up so well". Kyratzes also explained that work on the game had been slow due to the development of Croteam's other two games: the aforementioned Serious Sam 4, and The Hand of Merlin. He also stated that The Talos Principle 2 would be the company's next focus following Serious Sam 4's release.
References
External links
The Talos Principle VR official website
2014 video games
Abandoned buildings and structures in fiction
Android (operating system) games
Android (robot) video games
Existentialist works
Devolver Digital games
HTC Vive games
IOS games
Linux games
MacOS games
Nintendo Switch games
Oculus Rift games
Philosophical fiction
Physics in fiction
PlayStation 4 games
Puzzle video games
Religion in science fiction
Single-player video games
Science fiction video games
Video games with Steam Workshop support
Video games about artificial intelligence
Video games about robots
Video games developed in Croatia
Video games set in antiquity
Video games with alternate endings
Windows games
Xbox One games |
1186143 | https://en.wikipedia.org/wiki/Call%20Control%20eXtensible%20Markup%20Language | Call Control eXtensible Markup Language | Call Control eXtensible Markup Language (CCXML) is an XML standard designed to provide asynchronous event-based telephony support to VoiceXML. Its current status is a W3C Proposed Recommendation, adopted May 10, 2011. Whereas VoiceXML is designed to provide a Voice User Interface to a voice browser, CCXML is designed to inform the voice browser how to handle the telephony control of the voice channel. The two XML applications are wholly separate and are not required by each other to be implemented; however, they have been designed with interoperability in mind.
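The flavour of the language can be seen in a minimal, hypothetical CCXML document that accepts an incoming call, hands it to a VoiceXML dialog, and hangs up when the dialog ends; the file name hello.vxml is an assumption made for illustration.

<?xml version="1.0" encoding="UTF-8"?>
<ccxml version="1.0" xmlns="http://www.w3.org/2002/09/ccxml">
  <eventprocessor>
    <!-- an incoming call raises connection.alerting; answer it -->
    <transition event="connection.alerting">
      <accept/>
    </transition>
    <!-- once connected, start a VoiceXML dialog on the call -->
    <transition event="connection.connected">
      <dialogstart src="'hello.vxml'"/>
    </transition>
    <!-- when the dialog finishes, release the call and end the session -->
    <transition event="dialog.exit">
      <disconnect/>
      <exit/>
    </transition>
  </eventprocessor>
</ccxml>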
Status and Future
CCXML 1.0 has reached the status of a Proposed Recommendation. The transition from Candidate Recommendation to Proposed Recommendation took 1 year, while the transition from Last Call Working Draft to Candidate Recommendation took just over 3 years.
As CCXML makes extensive use of events and transitions, it is expected that the state machines used in the next version, CCXML 2.0, will take advantage of a new XML state machine notation called SCXML; however, SCXML is still a Working Draft.
Implementations
OptimTalk is an application platform providing a wide range of technologies and tools for managing, processing and automation of voice communication. It contains a media server supporting VoiceXML 2.0/2.1, CCXML, MRCPv2 and SIP. Besides these open standards it further supports SRGS, SISR, SSML, HTTP(S), SOAP, REST and SNMP. OptimTalk also serves as a versatile platform for computer telephony integration (CTI) and provides an instant access to speech technologies. It is available for Windows, Linux and Solaris, 32 and 64 bit.
Oktopous ccXML Browser is the first Linux-based comprehensive ccXML "lightweight" toolkit that conforms to the Working Draft spec of CCXML 1.0 published in April 2010. Oktopous enables developers to take advantage of well-known Web technologies and tools when building their telephony and speech applications. The Oktopous Engine powers over 5 million calls daily, and is free to download and integrate before going live.
Voxeo Prophecy IVR Platform is a full IVR platform combining CCXML, VoiceXML and several other technologies. Voxeo has 32-bit and 64-bit distributions for Windows, Mac OS X, and Linux. Prophecy is free for up to 2 ports.
Telesoft Technologies ARNE IVR Platform is a complete IVR platform, used in value added service and customer service applications. Combines CCXML, VoiceXML with MRCP, HTTP(s) interfaces and connects to internet protocol, fixed telephony and mobile phone telecoms networks using SIP, VOIP, SS7/PSTN and other telecoms protocols. Supports multi-tenant applications.
Open Source Oktopous PIK is an abstract C++ implementation of the W3C Call Control XML (ccXML) standard. Licensed under a BSD-style license, the toolkit is independent of the underlying telephony platform and protocols and is best suited for OEMs and system integrators looking to implement ccXML functionality in their product offerings. Originally developed by Phonologies, Oktopous has been adopted by more telephony platforms than any other open source ccXML browser.
CCXML4J is a CCXML interpreter in Java according to the W3C specification. It is independent of the telephony infrastructure and provides mechanisms to integrate with telephony APIs, e.g. based on the JAIN specifications. It is a derivative of Open Source Oktopous PIK.
ADVOSS SDP AdvOSS offers a programmable, extensible and enhanceable Service Delivery Platform that enables developers to build and deploy feature-rich SIP applications using the industry-standard Call Control XML (CCXML), for rapid development and deployment of new services. The platform is designed so that customers can program, extend and enhance its modules to meet the rapidly growing requirements of their own customers. CCXML provides simple primitives, allowing users to quickly transform ideas into solutions. In addition, the AdvOSS platform extends CCXML to support primitives for Authentication, Authorization and Accounting (AAA), allowing applications to interface with billing systems either directly or through RADIUS.
HP OCMP HP OCMP offers carrier-grade, large-scale, highly available voice/video platforms supporting a wide variety of standards. Integration with SMSC, LDAP, JDBC, SOAP, UCIP, XCAP and CDR processing connectivity is possible.
See also
VoiceXML
SCXML
MSML, MSCML: markup languages to control telephony media servers.
External links
Latest W3C Candidate Recommendation of CCXML
CCXML 1.0 Tutorial
Free ccXML Integration Kit
Open Source Integration Kit in C++
ccXML Group on LinkedIn
World Wide Web Consortium standards
XML-based standards
Computer-related introductions in 2010 |
50918393 | https://en.wikipedia.org/wiki/Bazel%20%28software%29 | Bazel (software) | Bazel () is a free software tool for the automation of building and testing of software. The company Google uses the build tool Blaze internally and released an open-sourced part of the Blaze tool as Bazel, named as an anagram of Blaze. Bazel was first released in March 2015 and achieved beta status by September 2015.
Similar to build tools like Make, Apache Ant, or Apache Maven, Bazel builds software applications from source code using a set of rules. Rules and macros are created in the Starlark language (previously called Skylark), a dialect of Python. There are built-in rules for building software written in the programming languages of Java, C, C++, Go, Python, Objective-C and Bourne shell scripts. Bazel can produce software application packages suitable for deployment for the Android and iOS operating systems.
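A hypothetical BUILD file illustrates how such rules are declared in Starlark syntax; the target and file names are invented for the example.

# BUILD file for a small C++ program (names are illustrative only)
cc_library(
    name = "greeter",
    srcs = ["greeter.cc"],
    hdrs = ["greeter.h"],
)

cc_binary(
    name = "hello",
    srcs = ["main.cc"],
    deps = [":greeter"],    # explicit dependency on the library target above
)

Such targets are then built from the command line, for example with bazel build //:hello.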
Rationale
One of the goals of Bazel is to create a build system where build target inputs and outputs are fully specified and therefore precisely known to the build system. This allows a more accurate analysis and determination of out-of-date build artifacts in the build system's dependency graph. Making the dependency graph analysis more deterministic leads to potential improvements in build times by avoiding re-executing unnecessary build targets. Build reliability is improved by avoiding errors where build targets might depend on out-of-date input artifacts.
To achieve more accurate dependency graph analysis, Bazel uses content digests rather than file-based timestamps. File timestamps are commonly used to detect changes in tools like Make or Apache Ant. Timestamps can be problematic when builds are distributed across multiple hosts due to issues with clock synchronization. One of Bazel's goals is to enable distributed and parallel builds on a remote cloud infrastructure. Bazel is also designed to scale up to very large build repositories which may not be practical to download to an individual developer's work machine.
Bazel provides tooling which helps developers to create bit-identical reproducible build outputs. Bazel's implemented rules avoid typical pitfalls such as embedding timestamps in generated outputs to ensure content digest matches. This in turn allows the build system to reliably cache (memoize) the outputs of intermediate build steps. Furthermore, reproducible builds make it possible to share intermediate build results between teams or departments in an organization, using dedicated build servers or distributed caches. Bazel therefore is particularly well-suited for larger organizations and software projects that have a significant number of build dependencies. A deterministic build and an ability to precisely analyze build input and output artifacts across the dependency graph lends itself to parallel execution of build steps.
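The idea of digest-keyed caching can be sketched in a few lines of Python; this is an illustration of the general principle, not Bazel's actual implementation, and the helper names are invented.

import hashlib, json, os, subprocess

def action_key(inputs, command):
    # Hash the command line plus the content of every declared input file.
    h = hashlib.sha256(" ".join(command).encode())
    for path in sorted(inputs):
        h.update(path.encode())
        with open(path, "rb") as f:
            h.update(f.read())
    return h.hexdigest()

def run_step(inputs, outputs, command, cache_path=".action_cache.json"):
    cache = json.load(open(cache_path)) if os.path.exists(cache_path) else {}
    key = action_key(inputs, command)
    if key in cache and all(os.path.exists(out) for out in outputs):
        return                      # inputs unchanged and outputs present: skip the step
    subprocess.check_call(command)  # otherwise re-execute the build action
    cache[key] = outputs
    with open(cache_path, "w") as f:
        json.dump(cache, f)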
Starlark language
Bazel is extensible with its custom Starlark programming language. Starlark uses a syntax which is a subset of the syntax of the Python programming language. Starlark, however, does not implement many of Python's language features, such as the ability to mutate collections or access file I/O, in order to avoid extensions that could create side effects or create build outputs not known to the build system itself. Such side effects could potentially lead to incorrect analysis of the build dependency graph.
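A minimal, hypothetical Starlark macro shows the style of such extensions: it wraps a built-in rule without performing any I/O or mutation of shared state. The file, target, and dependency names are invented.

# greeting.bzl -- a macro that always adds a common dependency (illustrative only)
def greeter_binary(name, srcs, deps = []):
    """Declares a cc_binary that additionally depends on //common:greeter."""
    native.cc_binary(
        name = name,
        srcs = srcs,
        deps = deps + ["//common:greeter"],
    )

A BUILD file would then use the macro after loading it with load("//:greeting.bzl", "greeter_binary").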
Bazel was designed as a multi-language build system. Many commonly used build systems are designed with a preference towards a specific programming language. Examples of such systems include Ant and Maven for Java, Leiningen for Clojure, sbt for Scala, etc. In a multi-language project, combining separate build systems and achieving the build speed and correctness benefits described above can be difficult and problematic.
Bazel also provides sand-boxed build execution. This can be used to ensure all build dependencies have been properly specified and the build does not depend, for example, on libraries installed only locally on a developer's work computer. This helps to ensure that builds remain portable and can be executed in other (remote) environments.
Build systems most similar to Bazel are Pants, Buck, and Please. Pants and Buck both aim for similar technical design goals as Bazel and were inspired by the Blaze build system used internally at Google. Blaze is also the predecessor to Bazel. Bazel, Pants, Buck, and Please adopted Starlark, or at least its BUILD file syntax, as the language of their BUILD files. Independently developed build systems with similar goals of efficient dependency graph analysis and automated build artifact tracking have been implemented in build systems such as tup.
Sandbox
One of the key features that differentiate Bazel from other build systems is the use of a sandbox for compilation steps. When Bazel performs separate compilation, it creates a new directory and fills it with symlinks to the explicit input dependencies for the rule. For languages like C/C++, this provides a significant safety net for the inclusion of header files: it ensures that the developer is aware of the files that are used in compilation, and it prevents the unexpected inclusion of a similarly named header file from another include directory.
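The mechanism can be sketched in Python; this simplified, hypothetical helper exposes only the declared inputs to the command, so an undeclared dependency fails instead of being silently picked up.

import os, subprocess, tempfile

def run_sandboxed(declared_inputs, command):
    # Create a fresh directory and populate it with symlinks to the declared inputs.
    sandbox = tempfile.mkdtemp(prefix="sandbox-")
    for src in declared_inputs:
        dst = os.path.join(sandbox, os.path.basename(src))
        os.symlink(os.path.abspath(src), dst)
    # Run the build command with the sandbox as its working directory.
    subprocess.check_call(command, cwd=sandbox)
    return sandbox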
This sandbox approach leads to issues with common build tools, resulting in a number of workarounds required to correctly compile code under different architectures. For example, when performing separate compilation for Mac/Darwin architectures, the compiler writes the input paths into SO and OSO symbols in the Mach-O binary, which can be seen with a command like nm -a mybinary | grep SO. These paths are needed for finding symbols during debugging. As a result, builds in Bazel must correct the compiled objects after the fact, trying to correct path-related issues that arose from the sandbox construction using flags like -fdebug-prefix-map and -oso_prefix, the latter having become available in XCode 11.0. Similar handling needs to take place in linking phases, rewriting the rpath values in shared object libraries with a command like install_name_tool.
Logo
Since Bazel's initial release the logo was a green letter "b" stylized into a stem of a basil plant with two leaves.
On July 5, 2017, the Bazel Blog announced a new logo, consisting of three green building blocks arranged to shape a heart.
See also
List of build automation software
Monorepo
References
External links
Build automation
Compiling tools
Google software
Software using the Apache license |
2783840 | https://en.wikipedia.org/wiki/Bandwidth%20Broker | Bandwidth Broker | RFC 2638 from the IETF defines the entity of the Bandwidth Broker (BB) in the framework of differentiated services (DiffServ). According to RFC 2638, a Bandwidth Broker is an agent that has some knowledge of an organization's priorities and policies and allocates quality of service (QoS) resources with respect to those policies. In order to achieve an end-to-end allocation of resources across separate domains, the Bandwidth Broker managing a domain will have to communicate with its adjacent peers, which allows end-to-end services to be constructed out of purely bilateral agreements. Admission control is one of the main tasks that a Bandwidth Broker has to perform, in order to decide whether an incoming resource reservation request will be accepted or not. Most Bandwidth Brokers use simple admission control modules, although there are also proposals for more sophisticated admission control according to several metrics such as acceptance rate, network utilization, etc. The BB acts as a Policy Decision Point (PDP) in deciding whether to allow or reject a flow, whilst the edge routers act as Policy Enforcement Points (PEPs) to police traffic (allowing and marking packets, or simply dropping them).
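The basic admission-control decision can be illustrated with a deliberately naive Python sketch; a real broker would also consult policy rules, per-class limits, and agreements with peer domains, and the class and method names here are invented.

class BandwidthBroker:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reservations = {}              # flow id -> reserved rate in Mbit/s

    def request(self, flow_id, rate_mbps):
        allocated = sum(self.reservations.values())
        if allocated + rate_mbps <= self.capacity:
            self.reservations[flow_id] = rate_mbps
            return True                     # admit: edge routers then police the flow
        return False                        # reject: request would exceed provisioned capacity

    def release(self, flow_id):
        self.reservations.pop(flow_id, None)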
DiffServ allows two carrier services apart from the default best effort service: Assured Forwarding (AF) and Expedited Forwarding (EF). AF provides a better-than-best-effort service, but is similar to best-effort traffic in that bursts and packet delay variation (PDV) are to be expected. Out of profile AF packets are given a lower priority by being marked as best effort traffic. EF provides a virtual wire service with traffic shaping to prevent bursts, strict admission control (out of profile packets are dropped) and a separate queue for EF traffic in the core routers, which together keep queues small and avoid the need for buffer management. The resulting EF service is low loss, low delay and low PDV. Hence although loosely a BB allocates bandwidth, really it allocates carrier services (i.e. QoS resources).
Bandwidth Brokers can be configured with organizational policies, keep track of the current allocation of marked traffic, and interpret new requests to mark traffic in light of the policies and current allocation. Bandwidth Brokers only need to establish relationships of limited trust with their peers in adjacent domains, unlike schemes that require the setting of flow specifications in routers throughout an end-to-end path. In practical technical terms, the Bandwidth Broker architecture makes it possible to keep state on an administrative domain basis, rather than at every router, and the DiffServ architecture makes it possible to confine per flow state to just the edge or leaf routers.
The scope of BBs has expanded and they are now not restricted to DiffServ domains. As long as the underlying QoS mechanism can be mapped to DiffServ behaviour, then a BB can understand it and communicate with its adjacent peers, i.e. the 'lingua franca' of QoS in the Internet should be DiffServ. There may be more than one BB in a domain, though if there are, RFC 2638 envisages that only one BB will function as the top-level inter-domain BB.
Manages each cloud’s resources (Bandwidth Broker)
Packets are "coloured" to indicate forwarding "behavior"
Focus on aggregates and NOT on individual flows
Policing at network periphery to get services
Used together with Multiprotocol Label Switching (MPLS) and Traffic Engineering (TE)
"Aggregated" QoS guarantees only!
Poor on the guarantees for end-to-end applications
References
Further reading
: A Two-bit Differentiated Services Architecture for the Internet
QBone Bandwidth Broker Architecture
The Survey of Bandwidth Broker
Internet Quality of Service
Decoupling QoS Control from Core Routers: A Novel Bandwidth Broker Architecture for Scalable Support of Guaranteed Services
An Adaptive Admission Control Algorithm for Bandwidth Brokers
A Scalable and Robust Solution for Bandwidth Allocation
Implementation of a Simple Bandwidth Broker for DiffServ Networks
Providing End-to-End guaranteed Quality of Service over the Internet: A survey on Bandwidth Broker Architecture for Differentiated Services Network
Research projects developing Bandwidth Broker architectures
Networks |
8240553 | https://en.wikipedia.org/wiki/Tabula%20iliaca | Tabula iliaca | A Tabula Iliaca ("Iliadic table") is a generic label for a calculation of the days of the Iliad, probably by Zenodotus, of which twenty-two fragmentary examples are now known. The Tabulae Iliacae are pinakes of early Imperial date, which all seem to have come from two Roman workshops, one of which seems to have been designed to satisfy a clientele of more modest aspirations.
Description of tablets
The term is conventionally applied to some twenty-one marble panels carved in very low relief in miniature rectangles with labeling inscriptions typically surrounding a larger central relief and short engraved texts on the obverse. Little can be said about their sizes, since none survives complete. It appears that the largest rectangular tablet is 25 cm by 42 cm. The border scenes, where they can be identified, are largely derived from the Epic Cycle; eleven of the tablets are small pictorial representations of the Trojan War portraying episodes from the Iliad, including two circular ones on the Shield of Achilles. Another six panels depict the sack of Ilium. On the reverse of the Borgia Tabula is a list of titles and authors of epic works, with stichometry, a listing of the number of lines in each epic; though these have occasioned great interest, W. McLeod demonstrated that, far from representing the tradition of Hellenistic scholarship, in every case where facts can be checked with the accepted canon, the compiler of the Borgia Table errs, citing an otherwise unattested Danaides and ascribing a new poem to Arctinus. McLeod suggests literary fakery designed to impress the nouveaux-riches as embodied by the fictional character Trimalchio, who is convinced that Troy was taken by Hannibal; Nicholas Horsfall finds the "combination of error and erudition" designed to impress just such eager newly educated consumers of culture with showy but spurious proofs of their erudition: "The Borgia Table is a pretense of literacy for the unlettered," is McLeod's conclusion. Michael Squire, in "The Iliad in a Nutshell: Visualizing Epic on the Tabulae Iliacae" (Oxford; New York: Oxford University Press, 2011), reviewed in "BMCR", sees in them a more sophisticated product.
Tabula Iliaca Capitolina
One of the most complete examples surviving is the Tabula Iliaca Capitolina, which was discovered around Bovillae, near Rome. The tablet dates from the Augustan period, around 15 BCE. The carvings depict numerous scenes of the Trojan War, with captions, including an image of Aeneas climbing aboard a ship after the sacking of Troy. The carving's caption attributes its depiction to a poem by Stesichorus in the 6th century BCE, although there has been much scholarly skepticism since the mid-19th century. Theodor Schreiber's Atlas of Classical Antiquities (1895) included a line-by-line description of the tablet with line-drawings. The Tabula Iliaca Capitolina is currently in the Capitoline Museums in Rome.
Sources
Theodor Bergk Commentatio de tabula Iliaca Parisiensi. Marburg, Typis Elwerti Academicis, 1845.
Anna Sadurska. Les tables iliaques. Warszawa, Państwowe Wydawn. Naukowe, 1964.
Nicholas Horsfall "Tabulae Iliacae in the Collection Froehner, Paris". The Journal of Hellenic Studies 103 (1983), pp. 144–47.
Michael Squire, The Iliad in a Nutshell: Visualizing Epic on the Tabulae Iliacae. Oxford; New York: Oxford University Press, 2011. (Reviews: Bryn Mawr Classical Review 2013.02.32)
David Petrain, Homer in Stone: The Tabulae Iliacae in their Roman Context. Cambridge: Cambridge University Press.
References
Iliad
Capitoline Museums collection
Cultural depictions of the Trojan War |
459280 | https://en.wikipedia.org/wiki/Su%20%28Unix%29 | Su (Unix) | The Unix command su, which stands for 'substitute user' (originally 'superuser'), is used by a computer user to execute commands with the privileges of another user account. When executed it invokes a shell without changing the current working directory or the user environment.
When the command is used without specifying the new user id as a command line argument, it defaults to using the superuser account (user id 0) of the system.
History
The command su, including the Unix permissions system and the setuid system call, was part of Version 1 Unix. Encrypted passwords appeared in Version 3. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.
Usage
When run from the command line, su asks for the target user's password, and if authenticated, grants the operator access to that account and the files and directories that account is permitted to access.
john@localhost:~$ su jane
Password:
jane@localhost:/home/john$ exit
logout
john@localhost:~$
When used with a hyphen (su -) it can be used to start a login shell. In this mode users can assume the user environment of the target user.
john@localhost:~$ su - jane
Password:
jane@localhost:~$
The command sudo is related, and executes a command as another user but observes a set of constraints about which users can execute which commands as which other users (generally in a configuration file named /etc/sudoers, best editable by the command visudo). Unlike su, sudo authenticates users against their own password rather than that of the target user (to allow the delegation of specific commands to specific users on specific hosts without sharing passwords among them and while mitigating the risk of any unattended terminals).
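Two illustrative /etc/sudoers entries (the user, group, and command names are hypothetical) show how such delegation is typically expressed:

# Allow jane to restart the web server as root, without being asked for a password
jane    ALL=(root) NOPASSWD: /usr/bin/systemctl restart nginx
# Allow members of the admin group to run any command as any user
%admin  ALL=(ALL:ALL) ALL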
Some Unix-like systems implement the user group wheel, and only allow members to become root with su. This may or may not mitigate these security concerns, since an intruder might first simply break into one of those accounts. GNU su, however, does not support the group wheel for philosophical reasons. Richard Stallman argues that because the group would prevent users from utilizing root passwords leaked to them, the group would allow existing admins to ride roughshod over ordinary users.
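On many Linux distributions the wheel restriction is enforced not by su itself but through PAM; a line such as the following in /etc/pam.d/su (shown purely as an illustration of the mechanism) limits su to members of the wheel group:

auth       required   pam_wheel.so use_uid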
See also
Unix security
List of Unix commands
Comparison of privilege authorization features
References
External links
su – manual pages from GNU coreutils.
The su command – by The Linux Information Project (LINFO)
Unix user management and support-related utilities
System administration |
46319488 | https://en.wikipedia.org/wiki/2015%20Troy%20Trojans%20football%20team | 2015 Troy Trojans football team | The 2015 Troy Trojans football team represented Troy University in the 2015 NCAA Division I FBS football season. They were led by first-year head coach Neal Brown and played their home games at Veterans Memorial Stadium in Troy, Alabama. The Trojans were members of the Sun Belt Conference. They finished the season 4–8, 3–5 in Sun Belt play, ending in a five-way tie for fifth place.
Schedule
Troy announced their 2015 football schedule on February 27, 2015. The 2015 schedule consisted of five home and seven away games in the regular season. The Trojans hosted Sun Belt foes Georgia Southern, Idaho, Louisiana–Monroe, and South Alabama, and traveled to Appalachian State, Georgia State, Louisiana–Lafayette, and New Mexico State.
Game summaries
at NC State
Charleston Southern
at Wisconsin
South Alabama
at Mississippi State
Idaho
at New Mexico State
at Appalachian State
Louisiana–Monroe
Georgia Southern
at Georgia State
at Louisiana–Lafayette
References
Troy
Troy Trojans football seasons
Troy Trojans football |
2197 | https://en.wikipedia.org/wiki/Amstrad%20CPC | Amstrad CPC | The Amstrad CPC (short for Colour Personal Computer) is a series of 8-bit home computers produced by Amstrad between 1984 and 1990. It was designed to compete in the mid-1980s home computer market dominated by the Commodore 64 and the Sinclair ZX Spectrum, where it successfully established itself primarily in the United Kingdom, France, Spain, and the German-speaking parts of Europe.
The series spawned a total of six distinct models: The CPC464, CPC664, and CPC6128 were highly successful competitors in the European home computer market. The later 464plus and 6128plus, intended to prolong the system's lifecycle with hardware updates, were considerably less successful, as was the attempt to repackage the plus hardware into a game console as the GX4000.
The CPC models' hardware is based on the Zilog Z80A CPU, complemented with either 64 or 128 KB of RAM. Their computer-in-a-keyboard design prominently features an integrated storage device, either a compact cassette deck or 3 inch floppy disk drive. The main units were only sold bundled with either a colour, green-screen or monochrome monitor that doubles as the main unit's power supply. Additionally, a wide range of first and third-party hardware extensions such as external disk drives, printers, and memory extensions, was available.
The CPC series was pitched against other home computers primarily used to play video games and enjoyed a strong supply of game software. The comparatively low price for a complete computer system with dedicated monitor, its high-resolution monochrome text and graphic capabilities and the possibility to run CP/M software also rendered the system attractive for business users, which was reflected by a wide selection of application software.
During its lifetime, the CPC series sold approximately three million units.
Models
The original range
The philosophy behind the CPC series was twofold. Firstly, the concept was of an "all-in-one", where the computer, keyboard and its data storage device were combined in a single unit and sold with its own dedicated display monitor. Most home computers at that time, such as the ZX Spectrum series, Commodore 64, and BBC Micro, relied on the use of the domestic television set and a separately connected tape recorder or disk drive. In itself, the all-in-one concept was not new, having been seen before on business-oriented machines and the Commodore PET, but in the home computer space, it predated the Macintosh by almost a year.
Secondly, Amstrad founder Alan Sugar wanted the machine to resemble a "real computer, similar to what someone would see being used to check them in at the airport for their holidays", and for the machine to not look like "a pregnant calculator" – in reference presumably to the Sinclair ZX81 and ZX Spectrum with their low cost, membrane-type keyboards.
CPC 464
The CPC 464 was one of the most successful computers in Europe and sold more than two million units.
The CPC 464 featured 64 KB RAM and an internal cassette deck. It was introduced in June 1984 in the UK. Initial suggested retail prices for the CPC464 were GBP£249.00/DM899.00 with a green screen and GBP£359.00/DM1398.00 with a colour monitor. Following the introduction of the CPC6128 in late 1985, suggested retail prices for the CPC464 were cut by GBP£50.00/DM100.00.
In 1990, the 464plus replaced the CPC 464 in the model line-up, and production of the CPC 464 was discontinued.
CPC664
The CPC664 features 64 KB RAM and an internal 3-inch floppy disk drive. It was introduced on 25 April 1985 in the UK. Initial suggested retail prices for the CPC664 were GBP£339.00/DM1198.00 with a green screen and GBP£449.00/DM1998.00 with a colour monitor.
After the successful release of the CPC464, consumers were constantly asking for two improvements: more memory and an internal disk drive. For Amstrad, the latter was easier to realise. At the deliberately low-key introduction of the CPC664, the machine was positioned not only as the lowest-cost disk system but even the lowest-cost CP/M 2.2 machine. In the Amstrad CPC product range the CPC664 complemented the CPC464 which was neither discontinued nor reduced in price.
Compared to the CPC464, the CPC664's main unit has been significantly redesigned, not only to accommodate the floppy disk drive but also with a redesigned keyboard area. Touted as "ergonomic" by Amstrad's promotional material, the keyboard is noticeably tilted to the front with MSX-style cursor keys above the numeric keypad. Compared to the CPC464's multicoloured keyboard, the CPC664's keys are kept in a much quieter grey and pale blue colour scheme.
The back of the CPC664 main unit features the same connectors as the CPC464, with the exception of an additional 12V power lead. Unlike the CPC464's cassette tape drive that could be powered off the main unit's 5V voltage, the CPC664's floppy disk drive requires an additional 12V voltage. This voltage had to be separately supplied by an updated version of the bundled green screen/colour monitor (GT-65 and CTM-644 respectively).
The CPC664 was only produced for approximately six months. In late 1985, when the CPC6128 was introduced in Europe, Amstrad decided not to keep three models in the line-up, and production of the CPC664 was discontinued.
CPC6128
The CPC6128 features 128 KB RAM and an internal 3-inch floppy disk drive. Aside from various hardware and firmware improvements, one of the CPC6128's most prominent features is the compatibility with the CP/M+ operating system that rendered it attractive for business uses.
The CPC6128 was released on 13 June 1985 and initially only sold in the US. Imported and distributed by Indescomp, Inc. of Chicago, it was the first Amstrad product to be sold in the United States, a market that at the time was traditionally hostile towards European computer manufacturers. Two months later, on 15 August 1985, it arrived in Europe and replaced the CPC664 in the CPC model line-up. Initial suggested retail prices for the CPC6128 were US$699.00/£299.00/DM1598.00 with a green screen and US$799.00/£399.00/DM2098.00 with a colour monitor.
In 1990, the 6128plus replaced the CPC6128 in the model line-up, and production of the CPC6128 was discontinued.
The plus range
In 1990, confronted with a changing home computer market, Amstrad decided to refresh the CPC model range by introducing a new range variously labelled the plus or PLUS, 1990, or CPC+ range. The main goals were numerous enhancements to the existing CPC hardware platform, to restyle the casework to provide a contemporary appearance, and to add native support for cartridge media. The new model palette includes three variants, the 464plus and 6128plus computers and the GX4000 video game console. The "CPC" abbreviation was dropped from the model names.
The redesign significantly enhanced the CPC hardware, mainly to rectify its previous shortcomings as a gaming platform. The redesigned video hardware allows for hardware sprites and soft scrolling, with the colour palette extended from a maximum of 16 simultaneous colours (plus a separately definable border colour) out of a choice of 27, to a maximum of 31 (16 for the background and 15 for hardware sprites) out of 4096. The enhanced sound hardware offers automatic DMA transfer, allowing more complex sound effects with a significantly reduced processor overhead. Other hardware enhancements include the support of analogue joysticks, 8-bit printers, and ROM cartridges up to 4 Mbits.
The new range of models was intended to be completely backwards compatible with the original CPC models. Its enhanced features are only available after a deliberately obscure unlocking mechanism has been triggered, thus preventing existing CPC software from accidentally invoking them.
Despite the significant hardware enhancements, many viewed it as outdated, being based on an 8-bit CPU, and it failed to attract both customers and software producers who were moving towards systems such as the Commodore Amiga and Sega Mega Drive which was launched a few short months after the plus range. The plus range was a commercial failure, and production was discontinued shortly after its introduction in 1990.
464plus, 6128plus
The 464plus and 6128plus models were intended as "more sophisticated and stylish" replacements of the CPC464 and CPC6128. Based on the redesigned plus hardware platform, they share the same base characteristics as their predecessors: The 464plus is equipped with 64 KB RAM and a cassette tape drive, the 6128plus features 128 KB RAM and a 3" floppy disk drive. Both models share a common case layout with a keyboard taken over from the CPC6128 model, and the respective mass storage drive inserted in a case breakout.
In order to simplify the EMC screening process, the edge connectors of the previous models have been replaced with micro-ribbon connectors as previously used on the German Schneider CPC6128. As a result, a wide range of extensions for the original CPC range are connector-incompatible with the 464plus and 6128plus. In addition, the 6128plus does not have a tape socket for an external tape drive.
The plus range is not equipped with an on-board ROM, and thus the 464plus and the 6128plus do not contain a firmware. Instead, Amstrad provided the firmware for both models via the ROM extension facility, contained on the included Burnin' Rubber and Locomotive BASIC cartridge. This resulted in reduced hardware localization cost (only some select key caps and case labels had to be localized) with the added benefit of a rudimentary copy protection mechanism (without a firmware present, the machine itself could not copy a game cartridge's content). As the enhanced V4 firmware's structural differences cause problems with some CPC software directly calling firmware functions by their memory addresses, Amstrad separately sold a cartridge containing the original CPC6128's V3 firmware.
Both the 464plus and the 6128plus were introduced to the public in September 1990. Initial suggested retail prices were GBP£229/FRF1990 with a monochrome monitor and GBP£329/FRF2990 with a colour monitor for the 464plus, and GBP£329/FRF2990 with a monochrome monitor and GBP£429/FRF3990 with a colour monitor for the 6128plus.
GX4000
Developed as part of the plus range, the GX4000 was Amstrad's short-lived attempt to enter the video game console market. Sharing the plus range's enhanced hardware characteristics, it represents the bare-minimum variant of the range, without a keyboard or support for mass storage devices. It came bundled with two paddle controllers and the racing game Burnin' Rubber.
Special models and clones
CPC472
During the August holidays of 1985, Spain briefly introduced an import tax of 15,000 pesetas on computers containing 64 KB or less of RAM (Royal Decrees 1215/1985 and 1558/1985), and a new law (Royal Decree 1250/1985) mandated that all computers sold in Spain must have a Spanish keyboard. To circumvent this, Amstrad's Spanish distributor Indescomp (later to become Amstrad Spain) created and distributed the CPC472, a modified version of the CPC464. Its main differences are a small additional daughter board containing a CPC664 ROM chip and an 8 KB memory chip, and a keyboard with a ñ key (although some units were temporarily manufactured without the ñ key). The sole purpose of the 8 KB memory chip (which is not electrically connected to the machine and is therefore unusable) is to raise the machine's total memory specification to 72 KB in order to avoid the import tax. Some months later, Spain joined the European Communities under the Treaty of Accession 1985 and the import tax was abolished, so Amstrad added the ñ key to the CPC464 and production of the CPC472 was discontinued.
KC compact
The ("" - which means "small computer" - being a rather literal German translation of the English "microcomputer") is a clone of the Amstrad CPC built by East Germany's in October 1989. Although the machine included various substitutes and emulations of an Amstrad CPC's hardware, the machine is largely compatible with Amstrad CPC software. It is equipped with 64 KB memory and a CPC6128's firmware customized to the modified hardware, including an unmodified copy of Locomotive BASIC 1.1. The KC compact is the last 8-bit computer produced in East Germany.
Aleste 520EX
In 1993, the Omsk, Russia-based company Patisonic released the Aleste 520EX, a computer highly compatible with the Amstrad CPC6128. It could also be switched into an MSX mode. An expansion board named Magic Sound allowed playback of Scream Tracker files.
Reception
A BYTE columnist in January 1985 called the CPC 464 "the closest yet to filling" his criteria for a useful home computer, including good keyboard, 80-column text, inexpensive disk drive, and support for a mainstream operating system like CP/M.
Hardware
Processor
The entire CPC series is based on the Zilog Z80A processor, clocked at 4 MHz.
In order to avoid the CPU and the video logic simultaneously accessing the shared main memory and causing video corruption ("snowing"), CPU memory access is constrained to occur on microsecond boundaries. This effectively pads every machine cycle to four clock cycles, causing a minor loss of processing power and resulting in what Amstrad estimated to be an "effective clock rate" of "approximately 3.3 MHz."
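The arithmetic behind this figure can be illustrated with a short sketch; the padding rule and the three-cycle example instruction below are simplifying assumptions for illustration, not a measured instruction mix.

```python
# Illustrative sketch (assumption: every Z80 machine cycle, normally 3-6 T-states,
# is padded up to the next multiple of 4 T-states so that memory accesses fall on
# 1 microsecond boundaries of the 4 MHz clock).

BASE_CLOCK_MHZ = 4.0

def padded(t_states: int) -> int:
    """Round a machine cycle up to the next multiple of 4 T-states."""
    return -(-t_states // 4) * 4  # ceiling division

def effective_mhz(machine_cycles: list[int]) -> float:
    """Effective clock rate for an instruction given its machine-cycle lengths."""
    ideal = sum(machine_cycles)
    actual = sum(padded(m) for m in machine_cycles)
    return BASE_CLOCK_MHZ * ideal / actual

# Hypothetical instruction with machine cycles of 4, 3 and 3 T-states:
# 10 ideal T-states stretch to 12, i.e. roughly 3.3 MHz effective.
print(effective_mhz([4, 3, 3]))
```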
Memory
Amstrad CPCs are equipped with either 64 (CPC464, CPC664, 464plus, GX4000) or 128 (CPC6128, 6128plus) KB of RAM. This base memory can be extended by up to 512 KB using memory expansions sold by third-party manufacturers, and by up to 4096 KB using experimental methods developed by hardware enthusiasts. Because the Z80 processor is only able to directly address 64 KB of memory, additional memory from the 128 KB models and memory expansions is made available using bank switching.
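The following is a minimal sketch of the bank-switching idea: a 16 KB window of the CPU's 64 KB address space is remapped onto different physical banks. It is a simplified illustration, not the exact register interface of the CPC's gate array.

```python
# Simplified model: physical RAM divided into 16 KB banks, with one switchable
# window at 0x4000-0x7FFF in the Z80's 64 KB address space.

BANK_SIZE = 16 * 1024

class BankedMemory:
    def __init__(self, total_kb: int = 128):
        self.physical = bytearray(total_kb * 1024)   # all physical RAM
        self.window_bank = 1                         # bank 1 = base RAM at 0x4000-0x7FFF

    def select_bank(self, bank: int) -> None:
        """Map a different physical 16 KB bank into the CPU-visible window."""
        self.window_bank = bank

    def read(self, address: int) -> int:
        if 0x4000 <= address < 0x8000:               # the switchable window
            return self.physical[self.window_bank * BANK_SIZE + (address - 0x4000)]
        return self.physical[address]                # rest of the base 64 KB is fixed

mem = BankedMemory()
mem.select_bank(5)        # expansion RAM becomes visible to the 64 KB-limited CPU
value = mem.read(0x4000)  # reads from physical bank 5 instead of base RAM
```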
Video
Underlying a CPC's video output is the unusual pairing of a CRTC (Motorola 6845 or compatible) with a custom-designed gate array to generate the pixel display output. Later-production CPC6128s, as well as the plus-range models, integrate the functions of both the CRTC and the gate array into a single ASIC.
Three built-in display resolutions are available: 160×200 pixels with 16 colours ("Mode 0", 20 text columns), 320×200 pixels with 4 colours ("Mode 1", 40 text columns), and 640×200 pixels with 2 colours ("Mode 2", 80 text columns). Increased screen size can be achieved by reprogramming the CRTC.
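The trade-off between horizontal resolution and colour depth can be illustrated with a short sketch that derives the screen-memory requirements of each mode; the figures it prints (80 bytes per scan line, 16,000 bytes of pixel data) follow directly from the resolutions and colour counts above.

```python
# Sketch: each CPC mode trades horizontal resolution against colour depth while
# using the same amount of screen memory.

modes = {
    "Mode 0": (160, 200, 16),   # width, height, colours
    "Mode 1": (320, 200, 4),
    "Mode 2": (640, 200, 2),
}

for name, (width, height, colours) in modes.items():
    bits_per_pixel = colours.bit_length() - 1        # 16 -> 4 bpp, 4 -> 2 bpp, 2 -> 1 bpp
    bytes_per_line = width * bits_per_pixel // 8
    print(name, bits_per_pixel, "bpp,", bytes_per_line, "bytes/line,",
          bytes_per_line * height, "bytes of pixel data")
```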
The original CPC video hardware supports a colour palette of 27 colours, generated from RGB colour space with each colour component assigned as either off, half on, or on (3 level RGB palette). The plus range extended the palette to 4096 colours, also generated from RGB with 4 bits each for red, green and blue (12-bit RGB).
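A short sketch shows how the two palette sizes arise from their colour encodings (three intensity levels per component for the original hardware, four bits per component for the plus range):

```python
# Sketch: deriving the palette sizes from the colour encodings described above.
from itertools import product

levels = ("off", "half", "on")                    # 3-level RGB of the original hardware
original_palette = list(product(levels, repeat=3))
print(len(original_palette))                      # 27 colours

plus_palette_size = 16 ** 3                       # 4 bits each for red, green, blue
print(plus_palette_size)                          # 4096 colours
```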
With the exception of the GX4000, all CPC models lack an RF television or composite video output; instead they shipped with a 6-pin RGB DIN connector, also used by Acorn computers, to connect the supplied Amstrad monitor. This connector delivers a 1 V peak-to-peak analogue RGB signal with 50 Hz composite sync that, if wired correctly, can drive a 50 Hz SCART television. External RF television adapters were available as a first-party hardware accessory.
Audio
The CPC uses the General Instrument AY-3-8912 sound chip, providing three channels, each configurable to generate square waves, white noise or both. A small set of hardware volume envelopes is available.
Output is provided in mono by a small (4 cm) built-in loudspeaker with volume control, driven by an internal amplifier. Stereo output is provided through a headphones jack.
It is possible to play back digital sound samples at a resolution of approximately 5 bits by sending a stream of values to the sound chip. This technique is very processor-intensive and hard to combine with any other processing. Examples are the title screens or other non-playable scenes of games like Chase H.Q., Meltdown, and RoboCop. The later plus models incorporated a DMA engine in order to offload this processing.
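A minimal sketch of the idea behind this trick follows. It maps unsigned 8-bit samples onto 4-bit volume register values for a single channel and ignores the chip's logarithmic volume response; it is an illustration of the principle, not a reproduction of any particular game's playback routine.

```python
# Sketch: sample playback by rapidly rewriting a channel's 4-bit volume register.
# Combining the three channels is what yields the roughly 5-bit resolution quoted above.

def pcm_to_volume_writes(samples_8bit):
    """Convert unsigned 8-bit PCM samples into 4-bit volume values (0-15)."""
    return [sample >> 4 for sample in samples_8bit]

pcm = [0, 32, 64, 128, 192, 255]
print(pcm_to_volume_writes(pcm))   # [0, 2, 4, 8, 12, 15]
```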
Floppy disk drive
Amstrad used Matsushita's 3" floppy disk drive [ref: CPCWiki], which was compatible with Hitachi's existing 3" floppy disk format. The chosen drive (built-in for later models) is a single-sided 40-track unit that requires the user to remove and flip the disk to access the other side. Each side has its own independent write-protect switch. The sides are termed "A" and "B", with each one commonly formatted to 180 KB (in AMSDOS format, comprising a 2 KB directory and 178 KB of storage) for a total of 360 KB per disk.
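The 180 KB figure can be reproduced from a plausible geometry; the 40 tracks, 9 sectors per track and 512 bytes per sector assumed below are consistent with the totals quoted above but are given here only for illustration.

```python
# Sketch: where the quoted per-side capacity comes from (assumed geometry).
TRACKS, SECTORS_PER_TRACK, SECTOR_BYTES = 40, 9, 512

total_kb = TRACKS * SECTORS_PER_TRACK * SECTOR_BYTES // 1024
print(total_kb)        # 180 KB per side, 360 KB per disk over both sides
print(total_kb - 2)    # 178 KB left for files after the 2 KB directory
```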
The interface with the drives is a NEC 765 FDC, used for the same purpose in the IBM PC/XT, PC/AT and PS/2 machines. Its features are not fully used in order to cut costs: DMA transfers and support for single-density disks are omitted, and disks are formatted as double density using modified frequency modulation.
Discs were shipped in a paper sleeve or a hard plastic case resembling a compact disc "jewel" case. The casing is thicker and more rigid than that of 3.5 inch diskettes, and designed to be mailed without any additional packaging. A sliding metal cover to protect the media surface is internal to the casing and latched, unlike the simple external sliding cover of Sony's version. They were significantly more expensive than both 5.25 inch and 3.5 inch alternatives. This, combined with their low nominal capacities and their essentially proprietary nature, led to the format being discontinued shortly after the CPC itself was discontinued.
Apart from Amstrad's other 3 inch machines (the PCW and the ZX Spectrum +3), the few other computer systems to use them included the Sega SF-7000 and CP/M systems such as the Tatung Einstein and Osborne machines. They also found use on embedded systems.
The Shugart-standard interface means that Amstrad CPC machines are able to use standard 3", 3½" or 5¼" drives as their second drive. Programs such as ROMDOS and ParaDOS extend the standard AMSDOS system to provide support for double-sided, 80-track formats, enabling up to 800 KB to be stored on a single disk.
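The 800 KB figure follows from a double-sided, 80-track layout; the sector geometry assumed in the sketch below is illustrative rather than a statement of the exact ROMDOS or ParaDOS format.

```python
# Sketch: capacity of a double-sided, 80-track format (assumed 10 sectors of 512 bytes).
sides, tracks, sectors_per_track, sector_bytes = 2, 80, 10, 512
print(sides * tracks * sectors_per_track * sector_bytes // 1024)   # 800 KB per disk
```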
The 3 inch disks themselves are usually known as "discs" on the CPC, following the spelling on the machine's plastic casing and conventional British English spelling.
Expansion
The hardware and firmware were designed to be able to access software provided on external ROMs. Each ROM has to be a 16 kB block and is switched in and out of the memory space shared with the video RAM. The Amstrad firmware is deliberately designed so that new software can be easily accessed from these ROMs with a minimum of fuss. Popular applications were marketed on ROM, particularly word processing and programming utility software (for example Protext and Brunword for word processing, and the Maxam assembler as a programming utility).
Such extra ROM chips do not plug directly into the CPC itself, but into plug-in "ROM boxes" which contain sockets for the ROM chips and a minimal amount of decoding circuitry that lets the main machine switch between them. These boxes were either marketed commercially or could be built by competent hobbyists, and they attached to the main expansion port at the back of the machine. Software on ROM loads much faster than from disc or tape, and the machine's boot-up sequence was designed to evaluate the ROMs it found and optionally hand over control of the machine to them. This allows significant customisation of the functionality of the machine, something that enthusiasts exploited for various purposes. However, typical users would probably not have been aware of this added ROM functionality unless they read the CPC press, as it is not described in the user manual and was hardly ever mentioned in marketing literature. It is, however, documented in the official Amstrad firmware manual.
The machines also feature a 9-pin Atari joystick port that accepts one joystick directly, or two joysticks via a splitter cable.
Peripherals
RS232 serial adapters
Amstrad issued two RS-232-C D25 serial interfaces, attached to the expansion connector on the rear of the machine, with a through-connector for the CPC464 disk drive or other peripherals.
The original interface came with a Book of Spells for facilitating data transfer between other systems using a proprietary protocol in the device's own ROM, as well as terminal software to connect to British Telecom's Prestel service. A separate version of the ROM was created for the U.S. market due to the use of the commands "|SUCK" and "|BLOW", which were considered unacceptable there.
Software and hardware limitations in this interface led to its replacement with an Amstrad-branded version of a compatible alternative by Pace. Serial interfaces were also available from third-party vendors such as KDS Electronics and Cirkit.
Software
BASIC and operating system
Like most home computers at the time, the CPC has its OS and a BASIC interpreter built into ROM. It uses Locomotive BASIC, an improved version of Locomotive Software's Z80 BASIC for the BBC Microcomputer co-processor board. It is particularly notable for providing easy access to the machine's video and audio resources, in contrast to the POKE commands required on generic Microsoft implementations. Other unusual features include timed event handling with the AFTER and EVERY commands, and text-based windowing.
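The timed-event semantics of AFTER (run a routine once after a delay) and EVERY (run it repeatedly at an interval) can be sketched with a rough Python analogue; this is an illustration of the behaviour, not Locomotive BASIC itself.

```python
# Rough analogue of Locomotive BASIC's AFTER/EVERY timed-event handling.
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def after(delay_s, action):
    """One-shot timer, analogous to BASIC's AFTER."""
    scheduler.enter(delay_s, 1, action)

def every(interval_s, action):
    """Repeating timer, analogous to BASIC's EVERY."""
    def tick():
        action()
        scheduler.enter(interval_s, 1, tick)
    scheduler.enter(interval_s, 1, tick)

after(1.0, lambda: print("one-shot event"))
# every(0.5, lambda: print("repeating event"))  # would fire indefinitely if enabled
scheduler.run()
```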
CP/M
Digital Research's CP/M operating system was supplied with the 664 and 6128 disk-based systems, and with the DDI-1 disk expansion unit for the 464. The 64 KB machines shipped with CP/M 2.2 alone, while the 128 KB machines also include CP/M 3.1. The compact CP/M 2.2 implementation is largely stored on the boot sectors of a 3" disk in what was called "System format"; typing |CPM from Locomotive BASIC loads code from these sectors, which made the mechanism a popular choice for custom game-loading routines. The CP/M 3.1 implementation is largely contained in a separate file which is in turn loaded from the boot sector.
Much public domain CP/M software was made available for the CPC, from word-processors such as VDE to complete bulletin board systems such as ROS.
Other languages
Although it was possible to obtain compilers for Locomotive BASIC, C and Pascal, the majority of the CPC's software was written in native Z80 assembly language. Popular assemblers were Hisoft's Devpac, Arnor's Maxam, and (in France) DAMS. Disk-based CPC (not plus) systems shipped with an interpreter for the educational language LOGO, booted from CP/M 2.2 but largely CPC-specific, with much of its code resident in the AMSDOS ROM; 6128 machines also include a non-ROM version for CP/M 3.1. A C compiler was also written and made available for the European market through Tandy Europe, by Micro Business Products.
Roland
In an attempt to give the CPC a recognisable mascot, a number of games by Amstrad's in-house software publisher Amsoft have been tagged with the Roland name. However, as the games had not been designed around the Roland character and only had the branding added later, the character design varies immensely, from a spiky-haired blonde teenager (Roland Goes Digging) to a white cube with legs (Roland Goes Square Bashing) or a mutant flea (Roland in the Caves). The only two games with similar gameplay and main character design are Roland in Time and its sequel Roland in Space. The Roland character was named after Roland Perry, one of the lead designers of the original CPC range.
Schneider Computer Division
In order to market its computers in Germany, Austria, and Switzerland, where Amstrad did not have any distribution structures, Amstrad entered a partnership with Schneider Rundfunkwerke AG, a German company that - very much like Amstrad itself - had previously been known only for value-priced audio products. In 1984, Schneider's subsidiary Schneider Computer Division was created specifically for the task, and the complete Amstrad CPC line-up was branded and sold as Schneider CPC.
Although they are based on the same hardware, the Schneider CPC models differ from the Amstrad CPC models in several details. Most prominently, the Schneider CPC464 and CPC664 keyboards featured grey instead of coloured keys, but still in the original British keyboard layout. To achieve a German "QWERTZ" keyboard layout, Schneider marketed a small software program to reassign the keys, as well as sticker labels for the keys. In order to conform with stricter German EMC regulations, the complete Schneider CPC line-up is equipped with internal metal shielding. For the same reason, the Schneider CPC6128 features micro-ribbon connectors instead of edge connectors. Both the grey keyboard and the micro-ribbon connectors found their way into the design of later Amstrad CPC models.
In 1988, after Schneider refused to market Amstrad's AT-compatible computer line, the cooperation ended. Schneider went on to sell the remaining stock of Schneider CPC models and used its now well-established market position to introduce its own PC designs. With the formation of its German subsidiary Amstrad GmbH to distribute its product lines, including the CPC464 and CPC6128, Amstrad attempted but ultimately failed to establish its own brand in the German-speaking parts of Europe.
Community
The Amstrad CPC enjoyed a long and strong lifetime, mainly due to the machines' use for business as well as gaming. Dedicated programmers continued working on the CPC range, even producing graphical user interface (GUI) operating systems such as SymbOS. Internet sites devoted to the CPC have appeared from around the world, featuring forums, news, hardware, software, programming and games. CPC magazines appeared during the 1980s, including publications in countries such as Britain, France, Spain, Germany, Denmark, Australia, and Greece. Titles included the official Amstrad Computer User publication, as well as independent titles like Amstrad Action, Amtix!, Computing with the Amstrad CPC, CPC Attack, Australia's The Amstrad User, and France's Amstrad Cent Pour Cent and Amstar. Following the end of the CPC's production, Amstrad gave permission for the CPC ROMs to be distributed freely, as long as the copyright message is not changed and it is acknowledged that Amstrad still holds the copyright, allowing emulator authors to ship the CPC firmware with their programs.
Influence on other Amstrad machines
Amstrad followed their success with the CPC 464 by launching the Amstrad PCW word-processor range, another Z80-based machine with a 3" disk drive and software by Locomotive Software. The PCW was originally developed to be partly compatible with an improved version of the CPC (ANT, or Arnold Number Two - the CPC's development codename was Arnold). However, Amstrad decided to focus on the PCW, and the ANT project never came to market.
On 7 April 1986, Amstrad announced it had bought from Sinclair Research "...the worldwide rights to sell and manufacture all existing and future Sinclair computers and computer products, together with the Sinclair brand name and those intellectual property rights where they relate to computers and computer-related products." which included the ZX Spectrum, for £5 million. This included Sinclair's unsold stock of Sinclair QLs and Spectrums. Amstrad made more than £5 million on selling these surplus machines alone. Amstrad launched two new variants of the Spectrum: the ZX Spectrum +2, based on the ZX Spectrum 128, with a built-in tape drive (like the CPC 464) and, the following year, the ZX Spectrum +3, with a built-in floppy disk drive (similar to the CPC 664 and 6128), taking the 3" discs that Amstrad CPC machines used.
Production Timeline
See also
Amstrad CPC character set
Amstrad CP/M Plus character set
List of Amstrad CPC emulators
List of Amstrad CPC games
GX4000
SymbOS (multitasking operating system)
Notes and references
External links
CPC-Wiki (CPC specific Wiki containing further information)
Unofficial Amstrad WWW Resource
New OS for the CPC
Computer-related introductions in 1984
CPC
Z80-based home computers
Computers designed in the United Kingdom
Fandom (website)
Fandom (also known as Wikia before October 2016) is a wiki hosting service which hosts wikis mainly on entertainment topics (e.g. video game and movie wikis). Its domain is operated by Fandom, Inc. (formerly known as Wikia, Inc.), a for-profit Delaware company founded in October 2004 by Jimmy Wales and Angela Beesley. Fandom was acquired in 2018 by TPG Capital and Jon Miller through Integrated Media Co.
Fandom uses MediaWiki, the open-source wiki software used by Wikipedia. Fandom, Inc. derives its income from advertising and sold content, publishing most user-provided text under copyleft licenses. The company also runs the associated Fandom editorial project, offering pop-culture and gaming news. Fandom wikis are hosted under the domain fandom.com, but some, especially those that focus on subjects other than media franchises, were hosted under wikia.org until November 2021.
History
2004–2009: Early days and growth
Fandom was launched on October 18, 2004, at 23:50:49 (UTC) under the name Wikicities (which invited comparisons to Yahoo's GeoCities), by Jimmy Wales and Angela Beesley Starling—respectively Chairman Emeritus and Advisory Board member of the Wikimedia Foundation. The name of the project was changed to Wikia on March 27, 2006. In the month before the move, Wikia announced a US$4 million venture capital investment from Bessemer Venture Partners and First Round Capital. Nine months later, Amazon.com invested US$10 million in Series B funding.
By September 2006, it had approximately 1,500 wikis in 48 languages. Over time, Wikia incorporated formerly independent wikis such as LyricWiki, Nukapedia, Uncyclopedia, and WoWWiki. Wikia CEO Gil Penchina described Wikia early on as "the rest of the library and magazine rack" to Wikipedia's encyclopedia. The material has also been described as informal, and often bordering on entertainment, allowing the importing of maps, YouTube videos, and other non-traditional wiki material.
2010–2015: New management
By 2010, wikis could be created in 188 different languages. In October 2011, Craig Palmer, the former CEO of Gracenote, replaced Penchina as CEO. In February 2012, co-founder Beesley Starling left Wikia to launch a startup called ChalkDrop.com. At the end of November 2012, Wikia raised US$10.8 million in Series C funding from Institutional Venture Partners and previous investors Bessemer Ventures Partners and Amazon.com. Another $15 million was raised in August 2014 for Series D funding, with investors Digital Garage, Amazon, Bessemer Venture Partners, and Institutional Venture Partners. The total raised at this point was $39.8 million.
On March 4, 2015, Wikia appointed Walker Jacobs, former executive vice-president of Turner Broadcasting System, to the new position of chief operating officer. In December 2015, Wikia launched the Fan Contributor Program.
2016–2018: Fandom brand
On January 25, 2016, Wikia launched a new entertainment news site named Fandom.
On October 4, 2016, Wikia.com was renamed "Fandom powered by Wikia", to better associate itself with the Fandom website. Wikia, Inc. remained under its current name, and the homepage of Wikia.com was moved to wikia.com/fandom.
In December 2016, Wikia appointed Dorth Raphaely, former general manager of Bleacher Report, as chief content officer.
2018–present: New acquisitions and inclusivity
In February 2018, former AOL CEO Jon Miller, backed by Private equity firm TPG Capital, acquired Fandom. Miller was named Co-chairman of Wikia, Inc., alongside Jimmy Wales, and TPG Capital director Andrew Doyle assumed the role of interim CEO.
In July 2018, Fandom purchased Screen Junkies from Defy Media, and in December of that year, they had acquired the media assets of Curse LLC, including wiki services Gamepedia, D&D Beyond, Futhead, Muthead, and Straw Poll.
In February 2019, former StubHub CEO Perkins Miller took over as CEO, and Wikia fully changed its domain name to fandom.com. Various wikis had been testing the new domain during 2018, with some wikis that focused on "more serious topics" instead having their domains changed to wikia.org.
In June 2019, Fandom began an effort to rewrite its core platform, which was based on MediaWiki version 1.19, to base it on a newer version of the software. On March 11, 2020, Fandom released the Unified Community Platform (UCP), based on MediaWiki 1.33, for newly created wikis.
In November 2020, Fandom began to migrate Gamepedia wikis to a fandom.com domain as part of their search engine optimization strategy, with migrations continuing into 2021.
In February 2021, Fandom acquired Focus Multimedia, the retailer behind Fanatical, an e-commerce platform that sells digital games, ebooks and other products related to gaming.
In late March 2021, Fandom updated its terms of use policy to prohibit deadnaming transgender individuals across their websites. This policy was in response to a referendum on the Star Wars wiki Wookieepedia to ban deadnaming, which triggered a debate around an article about the non-binary artist Robin Pronovost. In response to the deadnaming controversy, Fandom also introduced new LGBT guidelines across its websites in late June 2021 which include links to queer-inclusive and trans support resources.
In June 2021, Fandom began to roll out FandomDesktop, a redesigned theme for desktop devices, with plans to retire its legacy Oasis and Hydra skins once the rollout was complete. Two months later, on August 3, Fandom rolled out a new look, new colors, a new logo, and a new tagline, "For the love of fans."
In late November/early December 2021, all remaining wikis under the wikia.org domain migrated to the fandom.com domain.
Services and features
Present
Wikis
Fandom communities consist of online encyclopedias, each one specializing in a particular subject. Although Fandom allows almost anything to be the main focus of a wiki, the most common interest of its users is in popular fiction franchises of films, TV shows, games, books, and other media, because Wikipedia's notability policies considerably limit such detailed coverage. This contributed to the service being renamed to Fandom.
The main purpose of articles in a Fandom community is to cover information and discussion on a particular topic in much greater and more comprehensive detail than can be found in Wikipedia articles. For example, Spiteful Crow, an enemy character in EarthBound, may have its own article on the EarthBound Fandom wiki, whereas the character may not be considered notable enough for a Wikipedia page. Also, the writing style is mostly directed to those familiar with specific vocabulary and terminology rather than to the lay and general public of Wikipedia; the Harry Potter wiki, for example, is written from the perspective of everything in the franchise universe being real, so the article about the character Ronald Weasley starts by describing the subject as "a pure-blood wizard, the sixth and youngest son of Arthur and Molly Weasley" instead of "a character in the Harry Potter series".
Other examples of content generally considered beyond the scope of Wikipedia articles include detailed information about video games and related topics, such as instructions, gameplay details, and plot details. Gameplay concepts can also have their own articles. Fandom also allows wikis to have a point of view, rather than the neutral point of view that is required by Wikipedia (although NPOV is a local policy on many Fandom communities).
The image policies of Fandom communities tend to be more lenient than those of Wikimedia Foundation projects, allowing articles with much more illustration. Fandom requires all user text content to be published under a free license; most use the Creative Commons Attribution-ShareAlike license, although a few wikis use a non-free licence with a noncommercial clause (for instance Memory Alpha, Uncyclopedia and others) and some use the GNU Free Documentation License. Fandom's Terms of Use forbid hate speech, libel, pornography, or copyright infringement. Material is allowed, as long as the added material does not duplicate Wikimedia Foundation projects.
Wikis are also not owned by their founders, nor does the founder's opinion carry more weight in disagreements than any other user's opinion. Consensus and cooperation are intended to be the primary means of organizing a community. However, Fandom may make decisions affecting a community even when there is no consensus at all.
Technology
Fandom uses a heavily modified version of the MediaWiki software, based on version 1.33 of MediaWiki, which was officially marked as obsolete in June 2020. It has more than 250 extensions installed, most of them created by its staff of developers, to add social features like blogs, chat, badges, forums, and multimedia, but also to remove features like advanced user options or skins. The personal choice of using the Monobook skin instead of the default custom skin was removed on May 25, 2018, citing GDPR compliance.
In August 2016, Fandom announced it would switch to a service-oriented architecture. It has also removed many custom extensions and functionality for specific wikis, and has created certain features to fill those needs.
Entertainment news
In 2016, Wikia launched Fandom, an online entertainment media website. The program utilizes volunteer contributors called "Fandom Contributors" to produce articles, working alongside an editorial team employed by Wikia. In contrast to the blogging feature of individual wiki communities, Fandom focuses on pop culture and fan topics such as video games, movies, and television shows. The project features fan opinions, interviews with property creators, reviews, and how-to guides. Fandom also includes videos and specific news coverage sponsored or paid for by a property creator to promote their property.
In the same year, it was also announced that the entire Wikia platform would be rebranded under the Fandom name on October 4, 2016. A leak from Fandom's Community Council was posted to Reddit's /r/Wikia subreddit on August 12, 2018, confirming that Fandom would be migrating all wikis from the wikia.com domain, to fandom.com in early 2019, as part of a push for greater adoption of Fandom's wiki-specific applications on both iOS and Android's app ecosystems. The post was later deleted.
Wiki partnerships
Fandom has created several official partnerships to create wikis, vetted by the corporation as being the "official" encyclopedia or wiki of a property. In 2014, Fandom partnered with Roddenberry Enterprises to create the Trek Initiative, a Fandom hosted wiki community site that features video interviews, promotions, and other material about Star Trek to celebrate its 50th anniversary. In 2013, Fandom partnered with SOE (now called Daybreak Games) to create official wikis for several of their games such as Free Realms, PlanetSide 2, and the EverQuest franchise. Fandom made similar partnerships with 2K Games during the launch of Civilization: Beyond Earth and Warner Bros Interactive for Shadow of Mordor. Fandom also has partnerships with Lionsgate Media to promote Starz and Film franchises through wiki content, fandom articles, and advertisements.
Questions and answers site
In January 2009, the company created a question and answer website named "Wikianswers", not to be confused with the preexisting WikiAnswers. In March 2010, Fandom re-launched "Answers from Wikia", where users could create topic-specialized knowledge market wikis based upon Fandom's own Wikianswers subdomain.
Esports
After controversy regarding their previous attempts to reach gamers via Twitch streams, in 2021 the United States Navy hired Fandom to manage and promote esports tournaments and streams on Twitch.
Past services
OpenServing
OpenServing was a short-lived Web publishing project owned by Fandom, founded on December 12, 2006, and abandoned, unannounced, in January 2008. Like Fandom, OpenServing was to offer free wiki hosting, but it would differ in that each wiki's founder would retain any revenue gained from advertising on the site. OpenServing used a modified version of the Wikimedia Foundation's MediaWiki software created by ArmchairGM, but was intended to branch out to other open source packages.
According to Fandom co-founder and chairman Jimmy Wales, the OpenServing site received several thousand applications in January 2007. However, after a year, no sites had been launched under the OpenServing banner.
ArmchairGM
ArmchairGM was a sports forum and wiki site created by Aaron Wright, Dan Lewis, Robert Lefkowitz and developer David Pean. Launched in early 2006, the site was initially US-based, but sought to improve its links to sports associated with Britain over its first year. Its MediaWiki-based software included a Digg-style article-voting mechanism, blog-like comment forms with "thumbs up/down" user feedback, and the ability to write multiple types of posts (news, opinions, or "locker room" discussion entries).
In late 2006, the site was bought by Fandom for $2 million. After the purchase was made, the former owners applied ArmchairGM's architecture to other Fandom sites. From September 2010 to February 2011, Fandom absorbed the site's encyclopedia articles and blanked all of its old blog entries, effectively discontinuing ArmchairGM in its original form.
Search engines
Wikia, Inc. initially proposed creating a copyleft search engine; the software (but not the site) was named "Wikiasari" by a November 2004 naming contest. The proposal became inactive in 2005. The "public alpha" of Wikia Search web search engine was launched on January 7, 2008, from the USSHC underground data center. This roll-out version of the search interface was roundly panned by reviewers in technology media.
The project was ended in March 2009. Late in 2009, a new search engine was established to index and display results from all sites hosted on Fandom.
Controversies
Advertising controversies
Fandom communities have complained of inappropriate advertisements, or advertising in the body text area. There is no easy way for individual communities to switch to conventional paid hosting, as Fandom usually owns the relevant domain names. If a community leaves Fandom for new hosting, the company typically continues to operate the abandoned wiki using its original name and content, adversely affecting the new wiki's search rankings, for advertising revenue.
Relationship with Wikipedia
In the 2000s, Fandom was accused of unduly profiting from a perceived association with Wikipedia. Although Fandom has been referred to in the media as "the commercial counterpart to the non-profit Wikipedia", Wikimedia and Fandom staff call this description inaccurate.
In 2006, the Wikimedia Foundation shared hosting and bandwidth costs with Fandom, and received some donated office space from Fandom during the fiscal year ending June 30, 2006. At the end of the fiscal year 2007, Fandom owed the foundation US$6,000. In June 2007, two members of the foundation's board of directors also served as employees, officers, or directors of Fandom. In January 2009, Fandom subleased two conference rooms to the Wikimedia Foundation for the Wikipedia Usability Initiative. According to a 2009 email by Erik Möller, deputy director of the Wikimedia Foundation:
We obtained about a dozen bids...We used averaging as a way to arrive at a fair market rate to neither advantage nor disadvantage Wikia when suggesting a rate. The averaging also resulted in a rate that was roughly equivalent to the most comparable space in the running.
Fandom, Inc.
Fandom, Inc. is headquartered at 360 Third Street, in San Francisco, California. The company was incorporated in Florida in December 2004 and re-incorporated in Delaware as Wikia, Inc. on January 10, 2006.
Fandom has technical staff in the US, but also has an office in Poznań, Poland, where the primary engineering functions are performed.
Fandom derives income from advertising. The company initially used Google AdSense, but moved on to Federated Media before bringing ad management in-house. Alongside Fandom's in-house advertising, they continue to use AdSense as well as Amazon Ads and several other third party advertising services. Fandom additionally gains income from various partnerships oriented around various sweepstake sponsorships on related wikis.
Fandom has several other offices. International operations are based in Germany, and Asian operations and sales are conducted in Tokyo. Other sales offices are located in Chicago, Latin America, Los Angeles (marketing programming and content), New York City, and London.
See also
Comparison of wiki hosting services
Notes
References
External links
Free-content websites
MediaWiki websites
Wiki farms
American companies established in 2004
Internet properties established in 2004
Online publishing companies of the United States
Privately held companies based in California
Knowledge markets
South of Market, San Francisco
Wikis
2004 establishments in California
Jimmy Wales
Smartmatic
Smartmatic (also referred to as Smartmatic Corp. or Smartmatic International), or Smartmatic SGO Group, is a multinational company that builds and implements electronic voting systems. The company also produces smart city solutions (including public safety and public transportation), identity management systems for civil registration, and authentication products for government applications.
History
Founding
In 1997, three engineers, Antonio Mugica, Alfredo José Anzola and Roger Piñate, began collaborating in a group while working at Panagroup Corp. in Caracas, Venezuela. Following the 2000 United States presidential election and its hanging chad controversy in Florida, the group proposed to dedicate a system toward electoral functions. Smartmatic was officially incorporated on 11 April 2000 in Delaware by Alfredo José Anzola. Smartmatic then established its headquarters in Boca Raton, Florida with seven employees. After receiving funds from private investors, the company then began to expand.
Expansion
Smartmatic was a little-known firm with no experience in voting technology before it was chosen by the Venezuelan authorities to replace the country's elections machinery ahead of a contentious referendum that confirmed Hugo Chávez as president in August 2004. Before the election, Smartmatic was part of a consortium that included a software company partly owned by a Venezuelan government agency. In March 2005, with a windfall of some $120 million from its first three contracts with Venezuela, Smartmatic then bought the much larger and more established Sequoia Voting Systems, which by 2006 had voting equipment installed in 17 states and the District of Columbia. On August 26, 2005, Sequoia Voting Systems announced that Mr. Jack Blaine would serve in the dual role as President of Sequoia Voting Systems and President of Sequoia's parent company, Smartmatic.
Sale of Sequoia Voting Systems
On November 8, 2007, Smartmatic announced that it was divesting ownership of the voting machine company Sequoia Voting Systems. However, in April 2008, Smartmatic still held a $2 million note from SVS Holdings, Inc., the management team which purchased Sequoia Voting Systems from Smartmatic, and at that time Sequoia's machines still used Smartmatic's intellectual property.
SGO Corporation
In 2014, Smartmatic's CEO Antonio Mugica and British Lord Mark Malloch-Brown announced the launching of the SGO Corporation Limited, a holding company based in London whose primary asset is the election technology and voting machine manufacturer. Lord Malloch-Brown became chairman of the board of directors of SGO since its foundation, while Antonio Mugica remained as CEO of the new venture. They were joined on SGO's board by Sir Nigel Knowles, Global CEO of DLA Piper, entrepreneur David Giampaolo and Roger Piñate, Smartmatic's COO and co-founder. Malloch-Brown stepped down as chair in December 2020.
The aim of SGO, according to its CEO, was "to continue to make investments in its core business (election technology), but it is also set to roll out a series of new ventures based on biometrics, online identity verification, internet voting and citizen participation, e-governance and pollution control."
Elections
The company was contracted in 2004 for the automation of electoral processes in Venezuela. Since 2004, its election technology has been used in local and national elections in Africa, Argentina, Belgium, Brazil, Chile, Ecuador, Italy, Mexico, the Philippines, Singapore, the United Kingdom, the United States and Venezuela.
Africa
Smartmatic has operated in Uganda and Zambia, and is still deploying an identity management project in Sierra Leone. In 2010, Smartmatic worked with the United Nations Development Programme and Zambian authorities to modernise the voter registry using biometric technology, and in 2016 it maintained the voter registry ahead of the elections. Smartmatic also assisted the Electoral Commission of Uganda in modernising its election processes to increase the transparency of the 2016 general elections. The company supplied over 30,000 biometric machines across 28,010 polling stations, from the capital of Kampala to remote rural communities, to verify the identity of over 15 million people.
Armenia
During the 2017 Armenian parliamentary election, a voter authentication system was used for the first time. The identity of the voter was validated prior to voting using Voter Authentication Devices (VADs), which contained an electronic copy of the voter lists. The introduction of new technologies in the electoral process was strongly supported by the opposition and civil society. Smartmatic provided 4,000 Voter Authentication Devices to the UNDP project “Support to the Electoral Process in Armenia” (SEPA). It was funded by the EU, United States, Germany, United Kingdom, and the Government of Armenia.
According to final reports from The International Elections Observation Missions (IEOM) "The VADs functioned effectively and without significant issues." Observers reported the introduction of the VADs was welcomed by most IEOM interlocutors as a useful tool for building confidence in the integrity of Election Day proceedings. Observers also mentioned in the final report that the late introduction of the VADs could have led to a limited time for testing of equipment and training of operators, stating "Observers noted some problems with scanning of ID documents and fingerprints; however, this did not lead to significant disruptions of voting. IEOM observers noted 9 cases of voters attempting multiple voting that were captured by the VADs. The VADs provided the possibility for voters to be redirected, in case they were registered in another polling station in the same TEC, and this was observed in 55 polling stations."
Belgium
Electronic voting in Belgium has been utilized since the 1991 Belgian general election, with the country being only one of the few European countries that use electronic voting. In 2012, Belgium approved a ten-year contract with Smartmatic to be the election technology supplier after an evaluation period of three years. In an evaluation by constitutional law researcher Carlos Vegas González, he stated that the printout ballot increased transparency and noted that Smartmatic's system was independently certified by PricewaterhouseCoopers.
Brazil
Smartmatic provided election technology services to Brazil's Superior Electoral Court (TSE) for the Brazilian Municipal Elections, 2012, Brazilian General Election, 2014 and Brazilian Municipal Elections, 2016 cycles.
In October 2012, Smartmatic provided election support for data and voice communications to 16 states in Brazil, and the Federal District (FD) (deploying 1,300 Broadband Global Area Network (BGAN) satellite devices), as well as support services to voting machines. These services implied hiring and training 14,000 technicians who worked at 480,000 polling stations. In 2014, the Brazilian electoral commission relied on an increased number of BGAN terminals, deployed by Smartmatic, to enable results transmission. BGAN satellite broadband voice and data service was used to connect voting stations to the nation's electronic voting system.
Estonia
In 2014, Smartmatic and Cybernetica, the Estonian IT lab that built the original Internet voting system used in the country, co-founded the Centre of Excellence for Internet voting. The centre is working with the government of Estonia to advance Internet voting on a global scale.
Estonia is the only country to run Internet voting on a wide scale, where citizens can access services through their eID card. The e-voting system, the largest run by any European Union country, was first introduced in 2005 for local elections, and was subsequently used in the 2007, 2011 and 2015 parliamentary elections, with the proportion of voters using this voting method rising from 5.5 per cent to 24.3 per cent to 30.5 per cent respectively.
Some experts have warned that Estonia's online voting system might be vulnerable to hacking. In 2014, J. Alex Halderman, an associate professor at the University of Michigan, and his group, described as being "harshly critical of electronic voting systems around the world", reviewed Estonia's voting system. Halderman described the Estonian "i-voting" system as "pretty primitive by modern standards ... I got to observe the processes that they went through, and there were just—it was just quite sloppy throughout the whole time". A security analysis of the system by the University of Michigan and the Open Rights Group that was led by Halderman found that "the I-voting system has serious architectural limitations and procedural gaps that potentially jeopardize the integrity of elections", and recommended that the system be discontinued. The Estonian National Electoral Committee responded to the report, stating that the claims "were unsubstantiated and the described attacks infeasible." Before each election, the system is rebuilt from the ground up, and security testing, including penetration testing and denial-of-service mitigation tests, is carried out. In its statement, the Estonian National Electoral Committee says: "every aspect of online balloting procedures is fully documented, these procedures are rigorously audited, and video documenting all conducted procedures is posted online. In addition to opening every aspect of our balloting to observers, we have posted the source code of our voting software online. In the past decade, our online balloting has stood up to numerous reviews and security tests. We believe that online balloting allows us to achieve a level of security greater than what is possible with paper ballots".
Following the criticism, the number of Estonian e-voters at the 2015 Parliamentary Election was a record-breaking 176,491 (30.5% of votes cast).
Philippines
The adoption of Smartmatic was overseen by the Carter Center. Since the technology's introduction, random audits performed by the Commission on Elections (Comelec) have shown an accuracy rate of over 99.5% in all elections where Smartmatic equipment was used.
Smartmatic's entry into the Philippines was controversial. Several groups which had benefited from the traditionally fraudulent conduct of Philippine polls found themselves facing great political and economic loss with the promised transparency and auditability of the automated election system. The Manila Times stated that "only the truly uninformed would still find Smartmatic's combination of PCOS/VCM and CCS an acceptable solution to the automation of Philippine elections" and that "glitches" as well as the "lack of transparency ... convinced us of the system's unreliability and its vulnerability to tampering". Others supported Smartmatic's entry into the nation, with one group, the Concerned Citizens Movement, praising the company's performance after initially requesting Comelec not to use Smartmatic's systems.
2008 Philippine regional elections
On August 11, 2008, automated regional elections were held in the Philippines' Autonomous Region in Muslim Mindanao (ARMM). In the Maguindanao province, voters used Smartmatic's electronic voting machines, while voters in the other 5 provinces (Shariff Kabunsuan, Lanao del Sur, Basilan, Sulu, and Tawi-Tawi) used manually marked ballots processed using OMR technology. The overall reaction of both the public and authorities was positive toward the process.
2010 Philippine general election
In May 2010, Smartmatic automated the National Elections in the Republic of the Philippines. Election Day was Monday, May 10, 2010, with live, full coverage from ABS-CBN, ANC and GMA Network. Senator Benigno Aquino III succeeded Gloria Macapagal Arroyo as President, while Makati City mayor Jejomar Binay succeeded Noli de Castro as Vice President of the Philippines. Elected legislators of this year, together with the incumbent congresspersons from the 2007 elections, constitute the 15th Congress of the Philippines.
A survey conducted by the Social Weather Stations (SWS) showed that 75% of Filipinos questioned were satisfied with the conduct of the automated elections. The survey also showed that 70% of respondents were satisfied with Smartmatic.
2013 Philippine midterm elections
On 13 May 2013, halfway between its last Presidential elections in 2010 and its next in 2016, the Philippines held its midterm elections where 18,000 positions were at stake. Smartmatic again provided technology and services to Comelec. The same 82,000 voting machines used in 2010 were deployed.
Election watchdog National Citizens' Movement for Free Elections (Namfrel), one of Comelec's official citizens' arms for the midterm elections, assessed the polls as "generally peaceful and organized." The Philippine National Police considered the 2013 polls the most peaceful elections in the country's history. The US Embassy commended the Filipinos for the elections.
2016 Philippine presidential election
For the country's third national automated elections, the 2016 Philippine presidential election held on May 9, 2016, a total of 92,509 vote-counting machines (VCMs) were deployed across an archipelago comprising 7,107 islands, while 5,500 VCMs served as back-ups. For overseas absentee voting (OAV), 130 VCMs were deployed in 18 countries.
There were major challenges faced prior to elections, chief of which was the late-stage Supreme Court ruling that required each voting machine to print a receipt. The ruling was handed down on March 17, 2016, giving Comelec and Smartmatic less than two months to prepare. By election night, about 86% of election data had already been transmitted, prompting winners in local municipalities to be proclaimed in real-time. Also by election night, Filipinos already knew who the winning president was, leading other candidates to concede within 24 hours. This concession of several candidates signified acceptance of results that validated the credibility of the automation system. Over 20,000 candidates conceded.
Rodrigo Duterte became the 16th President of the Philippines, succeeding Benigno Aquino III, while Leni Robredo succeeded Jejomar Binay as Vice-President. Legislators elected in 2016 joined the senators elected in the 2013 midterm elections to constitute the 17th Congress of the Philippines.
2019 Philippine Senate election
During the 2019 Philippine Senate election, Smartmatic was minimally involved in the election and was only available for technical assistance. The majority of electoral functions were performed by Comelec after it purchased Smartmatic's voting machines following the 2016 elections.
Singapore
From the 2020 general election onwards, Smartmatic was used for the electronic registration of voters at polling stations on polling day, replacing the need for election officials to manually strike out each voter's particulars from a hardcopy register of electors when a voter has voted.
United States
2016 Utah republican presidential primaries
In the 2016 Utah Republican caucus, where Utah Republicans voted to choose the party's nominee for president in the 2016 US Presidential election, the voters had the opportunity to vote using traditional methods or to vote online. For online voting, the Utah Republican Party used an internet voting system developed by the Smartmatic-Cybernetica Internet Voting Centre of Excellence, based in Estonia.
Despite warnings from security experts, Utah GOP officials promoted the online voting system, for which the state paid $150,000. Multiple issues occurred with the system, with voters receiving error messages and even being blocked from voting. Smartmatic received thousands of calls from Utah voters about issues with the process. The Washington Post stated that "the concern seems to be less with the technology and more with the security of the devices people use to vote".
Joe Kiniry, the lead researcher of Galois, a technology research firm, also raised concerns about the security of the system. Responses from voters, who participated in the caucus from more than 45 different countries, were nevertheless positive: 94% approved of the experience, 97% responded that they were interested in participating in future online elections, and 82% thought online voting should be used nationally.
Los Angeles county
Los Angeles County, which has about 5 million registered voters, began searching for a new electoral system in 2009, after the county determined that available systems at the time were not suitable. The Voting System Assessment Project (VSAP) was initiated to establish a publicly owned voting system, and to provide research of electoral methods for other voting jurisdictions interested in replicating the process.
In 2017, Los Angeles County signed a $282 million contract with Smartmatic to create an election system for future elections, which became the first publicly owned voting system in the United States. The system was to be used for the first time during the 2020 California Democratic primary. Both software and hardware were developed in the United States by Smartmatic, while ownership of all products and intellectual property was then transferred to Los Angeles County. The machines incorporate an interactive ballot that is printed by each voter to validate results and then deposited back into the voting machine. According to VSAP, interest in the voting system was expressed by other districts in the United States and internationally.
Venezuela
Smartmatic was the main technology supplier for fourteen Venezuelan national elections. In March 2018, Smartmatic ceased operations in Venezuela.
2004 Venezuela recall referendum
Venezuela's previously existing laws that were established before Hugo Chávez's Bolivarian Revolution stated that automated voting was required in Venezuela, with United States firm Election Systems & Software and Spanish company Indra Sistemas already being used in the country. In response to a bid process for the 2004 Venezuela recall election initiated by the National Electoral Council (CNE), Venezuela's electoral authority, the SBC Consortium was formed in the third quarter of 2003. The SBC Consortium comprised Smartmatic, Bizta, and telecommunications organization CANTV. For the 2004 elections, the SBC Consortium competed with Indra and other companies, ultimately winning the contract worth $128 million. The voting machines used previously, furnished by Indra Sistemas, were mere ballot scanners having only basic functions for storing cast votes until the end of Election Day, with no feedback whatsoever for the voter. Smartmatic had re-engineered Olivetti lottery machines used in Italy, essentially state-of-the-art PCs, each providing a colour touchscreen, a thermal printer, and advanced programming handling the voting process and printing of VVPAT receipts for the voter to check, and also tally reports and data transmission at voting session closure, with special emphasis on security. Other than the touchscreen (operating under program control), there was no input device or communications in force during Voting Day. Smartmatic's role in the election was to oversee electoral workers' training and the preparation, testing and deployment of voting machines. Bizta sent manual votes in remote areas to software centers and CANTV provided logistical assistance.
2012 Venezuelan presidential election
In October 2012, Smartmatic participated in the elections of three countries. In Venezuela, on October 7, national elections were carried out for the first time anywhere in the world with biometric voter authentication used to activate the voting machines. Out of 18,903,143 citizens registered to vote in the presidential elections, voter turnout was around 81%, both record figures in Venezuelan electoral history.
2017 Venezuelan Constituent Assembly election
Smartmatic stated that the results of the 2017 Venezuelan Constituent Assembly election were manipulated. On August 2, 2017, Smartmatic CEO Antonio Mugica stated at a press briefing in London, "We know, without a doubt, that the result of the recent elections for a National Constituent Assembly were manipulated," adding, "We estimate that the difference between actual and announced participation by the authorities is at least one million votes." Reuters also reported that, according to internal CNE documents leaked to the agency, only 3,720,465 votes had been cast thirty minutes before polls were expected to close, though polls were open for an additional hour. The company left Venezuela in 2018.
Other endeavors
Automation
In 2011, The District of Cartagena in Colombia selected Smartmatic as technology provider for the new Financial Administration Service of the Integrated Mass Transit System (Transcaribe), which operates based on a highly automated fare collection and fleet control system.
Identification
Smartmatic was chosen to develop Mexico's new ID card in 2009, with the process involving the biometric registration of over 100 million people. Bolivia also used Smartmatic's biometric capabilities with the registration of 5.2 million people for electoral systems.
Security
Smartmatic launched its banking security endeavor in 2002 with its Smartnet system, which it described as "one of the earliest platforms to enable the 'Internet of Things'". The company began providing security technology and surveillance equipment for Santander-Serfin Bank in Mexico at its bank branches in 2004. In 2006, the Office of the Mayor of Metropolitan Caracas in Venezuela began the installation of an integrated public security system that helps authorities provide immediate response to citizens whose safety has been jeopardized.
Controversy
Venezuela
2004 elections
After the presidential recall referendum of 2004 in Venezuela, some controversy arose about the use of electronic voting (SAES voting machines) in that country. Studies following the 2004 Venezuela recall elections found that Smartmatic's network was "bi-directional", with data able to be transferred both ways between Smartmatic devices and the telecommunications company CANTV, and with alleged irregularities found between the Smartmatic and Venezuela's National Electoral Council election results. Other independent election monitors claimed fraud and submitted appeals, and statistical evaluations, including a peer-reviewed article in 2006 and a special section of six peer-reviewed articles in 2011, concluded that it was likely that electronic election fraud had been committed. The analysis of communication patterns allowed for the hypothesis that the data in the machines could have been changed remotely, while another of the articles suggested that the outcome could have been altered from about 60% against the sitting president to 58% for the sitting president. None of these hypotheses was ever confirmed.
Representatives from international election observation agencies attested that the election conducted using SAES was at that time fair, accurate and compliant with the accepted timing and reliability criteria. These agencies included the Carter Center, the Organization of American States (OAS), and the European Union (EU). Jennifer McCoy, Carter Center Director for the Americas, stated that several audits validated the accuracy of the machines: "We found a variation of only 0.1% between the paper receipts and the electronic results. This could be explained by voters putting the slips in the wrong ballot box."
Dr. Tulio Alvarez, who had performed an independent observation of the election which detailed the networks between CNE and Smartmatic, described the Carter Center's findings as "insufficient, superficial and irresponsible".
2005 elections
Prior to the 2005 Venezuelan parliamentary election, a technician demonstrated that it was possible to work around "the machine's allegedly random storage protocols" and compromise the secrecy of the vote. Since the voting systems were Windows-based and merely randomized the data, the technician was able to use a simple program that could place the Windows files back in order. Following this revelation, voter turnout dropped substantially, with only 25% of registered Venezuelans voting and opposition parties withdrawing from the election. As a result, Hugo Chávez's party and its allied parties came to control 100% of Venezuela's National Assembly.
Alleged affiliations with government
Affiliations with Bolivarian government politicians raised suspicions, with instances of an interior vice minister, Morris Loyo Arnáez, being hired to lobby for Smartmatic contracts, and with the company paying for National Electoral Council (CNE) president Jorge Rodríguez and his sister Delcy Rodríguez to stay at the Boca Raton Resort & Club in Boca Raton, Florida. Vice Minister Loyo was paid $1.5 million by Smartmatic as a "sales commission", and his payments from the company eventually doubled.
A lawyer who had worked with Rodríguez, Moisés Maiónica, was allegedly employed by Smartmatic to provide legal and financial assistance with its selection for the 2004 elections. Years later, in December 2008, Maiónica pled guilty in a United States District Court to attempting to cover up the Maletinazo scandal, an incident in which Hugo Chávez attempted to finance Cristina Kirchner's campaign in the 2007 Argentine presidential election, with Maiónica stating that he was working for Venezuela's spy agency, the National Directorate of Intelligence and Prevention Services. Smartmatic has denied ever having a relationship with Maiónica.
Alleged obfuscation of Venezuelan ownership
Smartmatic's headquarters moved to London in 2012, while it also has offices and R&D labs in the United States, Brazil, Venezuela, Barbados, Panama, the United Kingdom, the Netherlands, the Philippines, Estonia, and Taiwan.
The Wall Street Journal wrote that "Smartmatic scrapped a simple corporate structure" of being based in Boca Raton "for a far more complex arrangement" of being located in multiple locations following the Sequoia incident. Though Smartmatic has made differing statements saying that they were either American or Dutch based, the United States Department of State stated that its Venezuelan owners "remain hidden behind a web of holding companies in the Netherlands and Barbados". The New York Times states that "the role of the young Venezuelan engineers who founded Smartmatic has become less visible" and that its organization is "an elaborate web of offshore companies and foreign trusts", while BBC News states that though Smartmatic says the company was founded in the United States, "its roots are firmly anchored in (Venezuela)". Multiple sources simply state that Smartmatic is a Venezuelan company. Smartmatic maintains that the holding companies in multiple countries are used for "tax efficiency".
United States
During the 2006 local elections in Chicago and Cook County, allegations arose that Smartmatic might have ties to the Venezuelan government. These allegations were picked up again in 2020 by a legal representative of President Donald Trump, who accused the company of working with the socialist government of Venezuela in order to derail Trump's reelection (see also the Venezuela section above).
2006 local elections
Following the 2004 Venezuelan recall election, Smartmatic acquired Sequoia Voting Systems, one of the leading US automated voting companies, from the British company De La Rue in 2005. Following this acquisition, U.S. Representative Carolyn B. Maloney requested an investigation to determine whether the Committee on Foreign Investment in the United States (CFIUS) had followed correct processes in approving the sale of Sequoia to Smartmatic, which was described as having "possible ties to the Venezuelan government". The request was made after March 2006 following issues in Chicago and Cook County, where a portion of the machines involved were manufactured by Sequoia, and Sequoia provided technical assistance, some of it by Venezuelan nationals flown in for the event. According to Sequoia, the tabulation problems were due to human error, as a post-election check identified only three mechanical problems in 1,000 machines checked, while election officials blamed poor training. Other issues were suspected to be related to software errors linked to the voting system's central computer.
Following the request, Smartmatic and Sequoia submitted a request to be reviewed by the CFIUS while also denying links to the Venezuelan government. The company disclosed that it was mainly owned by four Venezuelans–Antonio Mugica (78.8%), Roger Piñate (8.47%), Jorge Massa Dustou (5.97%), and Alfredo José Anzola (3.87%)–with a small amount of shares owned by employees (2.89%). Smartmatic subsequently sold Sequoia and later withdrew from Cook County in December 2006.
2020 presidential election
Smartmatic was the subject of accusations of fraud in the aftermath of the 2020 United States presidential election, notably promoted by the personal attorney to President Donald Trump, Rudy Giuliani, who asserted the company was founded by the former socialist Venezuelan leader Hugo Chávez and that it owned and provided software to a related company, Dominion Voting Systems. Giuliani asserted Dominion is a "radical-left" company with connections to Antifa that sent American voting data to foreign Smartmatic locations. Others falsely asserted that Smartmatic was owned by George Soros and that the company owned Dominion. Smartmatic voting machines were not used in any of the battleground states that determined Joe Biden's election victory.
These accusations against Smartmatic were made on conservative television outlets, and the company sent them a letter demanding a retraction and threatening legal action. Fox Business host Lou Dobbs had been outspoken during his program about the accusations; on December 18 his program aired a video segment refuting the accusations, consisting of an interview with Edward Perez, an election technology expert at the Open Source Election Technology Institute, which fact checked allegations regarding the company (including those that had been made by Fox). Dobbs himself did not comment. Fox News hosts Jeanine Pirro and Maria Bartiromo had also been outspoken about the allegations, and both their programs aired the same video segment over the following two days. On December 21, Newsmax similarly complied with the request and presented an on-air clarification.
New York Times media journalist Ben Smith noted the possibility that a major defamation lawsuit could be filed against the outlets, drawing parallels with a 2012 lawsuit filed against ABC News by Beef Products Inc. over reports on "pink slime" that the company considered disparaging.
On February 4, 2021, Smartmatic sued Fox Corporation, Fox News Network, and its anchors Lou Dobbs, Maria Bartiromo, and Jeanine Pirro for $2.7 billion in the New York State Supreme Court as well as Rudy Giuliani and Sidney Powell, who spread baseless claims of election fraud on Fox. The 276-page complaint alleges that Fox, its anchors, Giuliani, and Powell spread a "conspiracy to defame and disparage Smartmatic and its election technology and software" by making new business opportunities increasingly scarce. Since February 5, Dobbs has been replaced with other anchors at Fox Business. On August 17, 2021, a New York State Supreme Court judge questioned lawyers for Powell, Giuliani, and Fox News about the claims made about Smartmatic. On November 3, 2021, Smartmatic sued Newsmax and One America News Network for promoting false claims of election fraud. On January 18, 2022, Smartmatic sued Mike Lindell and My Pillow for defamation, accusing Lindell of defaming the company to sell pillows.
Philippines
Smartmatic has been criticized by various entities for its motives and handling of elections in the Philippines. In opinion polls, voters have approved of Smartmatic's automated system used by the Commission on Elections (COMELEC), with 84% of respondents stating that they had "big trust" in the automated process according to a June 2019 Pulse Asia Research poll.
The Manila Times has stated that Smartmatic's system was unreliable, glitchy and vulnerable to tampering. After the newspaper reported that Smartmatic had been funneling voter information through "unofficial servers", The Manila Times ultimately called on officials from the country's electoral body, COMELEC, to resign. William Yu of the Parish Pastoral Council for Responsible Voting, an election NGO, stated that such servers perform "many other activities before the elections" and that it "does not necessarily, automatically mean that data has been transmitted", though he requested that COMELEC and Smartmatic provide an explanation.
In early 2017, The Manila Times reported that Smartmatic machines were equipped with SD cards where voter entries are recorded, citing Glenn Chong, a former congressman of the NGO Tanggulang Demokrasya (TANDEM) stating that "at least one SD card was tampered with", allegedly showing that Smartmatic's system was "very much open to hijacking or sabotage". A reviewer of the Philippine Linux Users' Group stated that hacking into Smartmatic's system is "very difficult for outsiders" and that "it's not as difficult to hack into the system if you're a COMELEC or a group of COMELEC or Smartmatic personnel", expressing importance of monitoring by COMELEC and asking the public to have good faith in the electoral body.
The IBON Foundation, a non-profit research organization based in the Philippines also criticized Smartmatic's system, stating in 2016 that "Why Smartmatic keeps on winning COMELEC contracts boggles the mind especially considering the numerous and major malfunctions by the machines and services that Smartmatic provided in the past two elections" and that there were "allegations of rigged bidding to favor Smartmatic such as designing contracts where only Smartmatic can qualify or omitting requirements that will otherwise disqualify Smartmatic".
2010 elections
Prior to the elections, Filipino-Americans called on President Barack Obama to investigate the background of Smartmatic due to its alleged links to the Venezuelan government. Smartmatic described these actions as "trying to rehash a story based on market share". Following allegations of fraud, some employees of Smartmatic had their passports temporarily held. At a fraud inquiry on May 20, 2010, Heider Garcia of Smartmatic was questioned on the transparency of, and what he called "unforeseen" occurrences during, the election process, with Philippine official Teodoro Locsin Jr. – an automated poll advocate – sharply rebuking Garcia. On June 29, 2010, the Philippine Computer Society (PCS) filed a complaint with the country's Ombudsman against 17 officials of the Commission on Elections and the Smartmatic-TIM Corp. for alleged "incompetence", graft and unethical conduct.
2016 elections
Days after the May 2016 elections, Bongbong Marcos, son of the late President Ferdinand Marcos, alleged that Smartmatic had tampered with the votes, costing him election as Vice President of the Philippines, and criminal proceedings were filed by the COMELEC against COMELEC personnel as well as Smartmatic employees, with Election Commissioner Rowena Guanzon stating that Smartmatic had violated protocols. After a Smartmatic employee fled the country, Bongbong Marcos blamed the COMELEC for his "escape", though two other Smartmatic personnel, one from Venezuela and the other from Israel, were present for criminal proceedings. In July 2016, it was reported that Smartmatic had funneled votes through "unofficial servers". In an October 2016 editorial, The Manila Times called on all members of COMELEC to resign due to the "innumerable controversies since its adoption of the Smartmatic-based Automated Election System".
On June 7, 2017, the Department of Justice (DOJ) indicted "several Smartmatic and COMELEC personnel for changing the script in the election transparency server on election night during the May 2016 national and local polls". Those charged with the tampering include Marlon Garcia, the head of Smartmatic's Technical Support Team, as well as two other Smartmatic employees, Neil Baniqued and Mauricio Herrera, and COMELEC IT employees Rouie Peñalba, Nelson Herrera, and Frances Mae Gonzales. The six were charged with "illegal access, data interference, and system interference" under the Cybercrime Prevention Act.
In August 2017, it was revealed that COMELEC Chairman Andres Bautista had allegedly been paid commissions by Divina Law, a firm that provides legal advice to Smartmatic, while serving as chairman "for assisting the law firm clients with the COMELEC". Bautista admitted that he obtained "referral fees", but denied that it was due to his position in COMELEC. According to House Deputy Minority Leader Harry Roque, the incident is "a very clear case of bribery" by Smartmatic.
See also
2008 Autonomous Region in Muslim Mindanao general election
Biometrics
Civil registration
DRE voting machine
Electronic Voting
References
2000 establishments in Venezuela
Election technology companies
Electronic voting
Technology companies established in 2000
Networking hardware |
28128957 | https://en.wikipedia.org/wiki/Gleducar | Gleducar | Gleducar is a free educational project emerged in Argentina in 2002. It is also an important NGO (Civil Association) from Argentina in the field of education and technology.
Gleducar is an independent community composed of teachers, students and education activists linked by a common interest in collective work, cooperative knowledge building and free distribution of knowledge.
The project works around different themes, such as Open Education, Open Access, Free Knowledge, Popular Education, peer education, collaborative learning and Free Technologies, and promotes the use of Free Software in schools as a pedagogical and technical system, with the objective of changing the paradigm of production, construction and dissemination of educational content.
It consists of an independent educational community incorporated as a self-organized NGO (Civil Association) which meets the interests and objectives of the community. Gleducar Project is the result of the sum of their community and the NGO that supports it.
Project objectives
Raise awareness among the educational community of the importance of computers as an educational support tool and as a facilitator of Cooperative Knowledge Building.
Create a community for sharing ideas, experiences and projects related to the topic of technological integration and use of free software among the various educational institutions, students, teachers and community in Argentina.
Promote the educational use of the Internet in schools as a tool for creative production and research, and as a means of communicating ideas and knowledge.
Inform the general community, and the educational community in particular, about the benefits offered by GNU/Linux and Free Software as an alternative to other systems with similar characteristics.
Provide advice to schools and teachers on technological integration and the development of genuinely integrative projects.
Attract computer specialists to the educational community and vice versa, fostering free expression and communicative exchange and passing on a love of learning and of challenges that promotes true shared learning.
History and development
The Gleducar Project was born around 2002 in the city of Cañada Rosquín, Santa Fé, Argentina. It was established as a Civil Association in 2004. Today it is one of the most important free-education projects in Argentina.
Gleducar Project was declared of National Interest by the Senate of Argentina in 2005.
It is recognized worldwide as a benchmark of free education in Latin America. In 2007 it received an honorable mention in the "Chris Nicol" International Free Software Competition of the Association for Progressive Communications (APC) for its outstanding work for free and sustainable education.
Gleducar's work has been an inspiration and a guide for the emergence of other similar projects and communities on the continent.
Actions
Gleducar carries out projects to improve computer labs, built on dozens of migrations to Free Software conducted in Argentine schools, which have provided high-quality resources for teaching.
Gleducar community also develops free educational materials in conjunction with other nonprofit organizations and working with the Argentine State. It has a large repository of free educational resources and a wide range of educational free software tools.
Gleducar develops pedagogical and technical skills on collaborative knowledge production, free education and free software.
It regularly organizes two annual events CoLCIT (Congress of Free Culture for Tertiary Institutions) and Epuel (Meeting for a Free Education). It also participates in conferences and local, regional and international meetings on the subject.
It has worked with the Argentinian National Ministry of Education in the development of free educational materials. However, the NGO does not currently receive financial contributions from any government entity.
Gleducar has also carried out numerous actions in conjunction with the Fundación Vía Libre. It has also developed open source projects and has carried out joint activities with other organizations such as Fairness Foundation, Caritas, CTERA (Confederation of Education Workers of Argentina), AMSAFE, SoLAr, FM La Tribu, CaFeLUG, LUGRo, Tuquito GNU/Linux, Wikimedia Argentina, among others. The project also hosts and promotes other initiatives such as Argenclic, Free University, among others.
Internationally, the project has participated in and acceded to various joint statements on access to knowledge and free / open education as the "Santo Domingo Declaration", the Cape Town Open Education Declaration and the "Charter for innovation, creativity and access to knowledge".
It also works on issues related to the free movement of knowledge, participating alongside other civic organizations on initiatives and campaigns warning about the threat of Intellectual Property regimes for the common cultural heritage, and on mobilizations against policies that victimizes the net neutrality and the freedom of expression on the Internet, on various occasions.
Resources provided by Gleducar
Gleducar provides multiple resources for teachers in particular and for anyone who wishes to undertake free projects related to C3 (cooperative knowledge building). The space is provided at no cost, on the condition that a free license is used which, at a minimum, allows the materials generated to be shared and kept free.
The Gleducar community has resources such as: an educational wiki which already contains more than 2,000 free educational resources (more than 5,000 pages in total) and over 5,600 registered users; a virtual campus; a webquest server; multiple mailing lists with over 570 members; and a personal-site aggregator that centralizes and tracks changes in the blogs of some members of the Gleducar community.
The technical infrastructure used by the NGO is provided by USLA (Free Software Users Argentina).
External links
Official web site
Official Website - English Google Translation
Photo Album
References
Educational materials
Open content
Copyleft
Educational organisations based in Argentina
Free and open-source software organizations
Free software projects
Organizations established in 2001 |
21347315 | https://en.wikipedia.org/wiki/Linux%20kernel | Linux kernel | The Linux kernel is a mostly free and open-source, monolithic, modular, multitasking, Unix-like operating system kernel. It was originally authored in 1991 by Linus Torvalds for his i386-based PC, and it was soon adopted as the kernel for the GNU operating system, which was written to be a free (libre) replacement for UNIX.
Linux as a whole is released under the GNU General Public License version 2 only, but it contains files under other compatible licenses. However, Linux began including proprietary binary blobs in its source tree and main distribution in 1996. This led other projects to start work on removing the proprietary blobs in order to produce a 100% libre kernel, which eventually led to the founding of the Linux-libre project.
Since the late 1990s, it has been included as part of a large number of operating system distributions, many of which are commonly also called Linux. However, there is a controversy surrounding the naming of such systems; some people, including Richard Stallman, argue that calling such systems "Linux" is erroneous because the operating system is actually mostly GNU, with the Linux kernel being one component added later, in 1992, nine years after the initiation of the GNU Project in 1983, and that the name "GNU+Linux" or "GNU/Linux" should therefore be used instead.
Linux is deployed on a wide variety of computing systems, such as embedded devices, mobile devices (including its use in the Android operating system), personal computers, servers, mainframes, and supercomputers. It can be tailored for specific architectures and for several usage scenarios using a family of simple commands (that is, without the need of manually editing its source code before compilation); privileged users can also fine-tune kernel parameters at runtime. Most of the Linux kernel code is written using the GNU extensions of GCC to the standard C programming language and with the use of architecture specific instructions (ISA). This produces a highly optimized executable (vmlinux) with respect to utilization of memory space and task execution times.
Day-to-day development discussions take place on the Linux kernel mailing list (LKML). Changes are tracked using the version control system git, which was originally authored by Torvalds as a free software replacement for BitKeeper.
History
In April 1991, Linus Torvalds, at the time a 21-year-old computer science student at the University of Helsinki, Finland, started working on some simple ideas for an operating system inspired by UNIX, for a personal computer. He started with a task switcher in Intel 80386 assembly language and a terminal driver. On 25 August 1991, Torvalds posted the following to comp.os.minix, a newsgroup on Usenet:
On 17 September 1991, Torvalds prepared version 0.01 of Linux and put it on "ftp.funet.fi", the FTP server of the Finnish University and Research Network (FUNET). It was not even executable, since its code still needed Minix to be compiled and run.
On 5 October 1991, Torvalds announced the first "official" version of Linux, version 0.02. At this point, Linux was able to run Bash, GCC, and some other GNU utilities:
After that, despite the limited functionality of the early versions, Linux rapidly gained developers and users. Many people contributed code to the project, including some developers from the MINIX community. At the time, the GNU Project had created many of the components required for its free UNIX replacement, the GNU operating system, but its own kernel, GNU Hurd, was incomplete. For this reason, the project soon adopted the Linux kernel too. The Berkeley Software Distribution had not yet freed itself from legal encumbrances and was not competing in the space for a free OS kernel.
Torvalds assigned version 0 to the kernel to indicate that it was mainly for testing and not intended for productive use. Version 0.11, released in December 1991, was the first self-hosted Linux, for it could be compiled by a computer running the same kernel.
When Torvalds released version 0.12 in February 1992, he adopted the GNU General Public License version 2 (GPLv2) over his previous self-drafted license, which had not permitted commercial redistribution. In contrast to Unix, all source files of Linux are freely available, including device drivers. The initial success of Linux was driven by programmers and testers across the world. Thanks to its support for the POSIX APIs, through libC which, where needed, acts as an entry point to the kernel address space, Linux could run software and applications that had been developed for Unix.
On 19 January 1992, the first post to the new newsgroup alt.os.linux was submitted. On 31 March 1992, the newsgroup was renamed comp.os.linux. The fact that Linux is a monolithic kernel rather than a microkernel was the topic of a debate between Andrew S. Tanenbaum, the creator of MINIX, and Torvalds. The Tanenbaum–Torvalds debate started in 1992 on the Usenet group comp.os.minix as a general discussion about kernel architectures.
Linux version 0.95 was the first to be capable of running the X Window System. In March 1994, Linux 1.0.0 was released with 176,250 lines of code. It was the first version suitable for use in production environments.
This release introduced a versioning scheme for the kernel with three or four numbers separated by dots, where the first represented the major release, the second the minor release, and the third the revision. At that time odd-numbered minor releases were for development and testing, whilst even-numbered minor releases were for production. The optional fourth digit indicated a set of patches to a revision. Development releases were indicated with the -rc ("release candidate") suffix.
The current version numbering is slightly different from the above. The even vs. odd numbering has been dropped and a specific major version is now indicated by the first two numbers, taken as a whole. While the time-frame is open for the development of the next major version, the -rcN suffix is used to identify the nth release candidate for the next version. For example, the release of version 4.16 was preceded by seven 4.16-rcN releases (from -rc1 to -rc7). Once a stable release is made, its maintenance is passed off to the "stable team". Occasional updates to stable releases are identified by a three-number scheme (e.g., 4.13.1, 4.13.2, ..., 4.13.16).
After version 1.3 of the kernel, Torvalds decided that Linux had evolved enough to warrant a new major number, so he released version 2.0.0 in June 1996. The series included 41 releases. The major feature of 2.0 was support for symmetric multiprocessing (SMP) and support for more types of processors.
Starting with version 2.0, Linux is configurable for selecting specific hardware targets and for enabling architecture specific features and optimizations. The make *config family of commands of kbuild are used to enable and configure thousands of options for building ad hoc kernel executables (vmlinux) and loadable modules.
Version 2.2, released on 20 January 1999, improved locking granularity and SMP management, added m68k, PowerPC, Sparc64, Alpha, and other 64-bit platforms support. Furthermore, it added new file systems including Microsoft's NTFS read-only capability. In 1999, IBM published its patches to the Linux 2.2.13 code for the support of the S/390 architecture.
Version 2.4.0, released on 4 January 2001, contained support for ISA Plug and Play, USB, and PC Cards. Linux 2.4 added support for the Pentium 4 and Itanium (the latter introduced the ia64 ISA that was jointly developed by Intel and Hewlett-Packard to supersede the older PA-RISC), and for the newer 64-bit MIPS processor. Development for 2.4.x changed a bit in that more features were made available throughout the duration of the series, including support for Bluetooth, Logical Volume Manager (LVM) version 1, RAID support, InterMezzo and ext3 file systems.
Version 2.6.0 was released on 17 December 2003. The development for 2.6.x changed further towards including new features throughout the duration of the series. Among the changes that have been made in the 2.6 series are: integration of µClinux into the mainline kernel sources, PAE support, support for several new lines of CPUs, integration of Advanced Linux Sound Architecture (ALSA) into the mainline kernel sources, support for up to 2^32 users (up from 2^16), support for up to 2^29 process IDs (64-bit only, 32-bit arches still limited to 2^15), substantially increased the number of device types and the number of devices of each type, improved 64-bit support, support for file systems which support file sizes of up to 16 terabytes, in-kernel preemption, support for the Native POSIX Thread Library (NPTL), User-mode Linux integration into the mainline kernel sources, SELinux integration into the mainline kernel sources, InfiniBand support, and considerably more.
Also notable are the addition of a wide selection of file systems starting with the 2.6.x releases: now the kernel supports a large number of file systems, some that have been designed for Linux, like ext3, ext4, FUSE, Btrfs, and others that are native of other operating systems like JFS, XFS, Minix, Xenix, Irix, Solaris, System V, Windows and MS-DOS.
In 2005 the stable team was formed as a response to the lack of a kernel tree where people could work on bug fixes, and it would keep updating stable versions. In February 2008 the linux-next tree was created to serve as a place where patches aimed to be merged during the next development cycle gathered. Several subsystem maintainers also adopted the suffix -next for trees containing code which they mean to submit for inclusion in the next release cycle. The in-development version of Linux is held in an unstable branch named linux-next.
Linux used to be maintained without the help of an automated source code management system until, in 2002, development switched to BitKeeper. It was freely available for Linux developers but it was not free software. In 2005, because of efforts to reverse-engineer it, the company which owned the software revoked its support of the Linux community. In response, Torvalds and others wrote Git. The new system was written within weeks, and in two months the first official kernel made using it was released.
Details on the history of the 2.6 kernel series can be found in the ChangeLog files on the 2.6 kernel series source code release area of kernel.org.
The 20th anniversary of Linux was celebrated by Torvalds in July 2011 with the release of the 3.0.0 kernel version. As 2.6 has been the version number for 8 years, a new uname26 personality that reports 3.x as 2.6.40+x had to be added to the kernel so that old programs would work.
Version 3.0 was released on 22 July 2011. On 30 May 2011, Torvalds announced that the big change was "NOTHING. Absolutely nothing." and asked, "...let's make sure we really make the next release not just an all new shiny number, but a good kernel too." After the expected 6–7 weeks of the development process, it would be released near the 20th anniversary of Linux.
On 11 December 2012, Torvalds decided to reduce kernel complexity by removing support for i386 processors, making the 3.7 kernel series the last one still supporting the original processor. The same series unified support for the ARM processor.
Version 3.11, released on 2 September 2013, adds many new features such as the new O_TMPFILE flag for open(2) to reduce temporary file vulnerabilities, experimental AMD Radeon dynamic power management, low-latency network polling, and zswap (compressed swap cache).
The numbering change from 2.6.39 to 3.0, and from 3.19 to 4.0, involved no meaningful technical differentiation. The major version number was increased to avoid large minor numbers. Stable 3.x.y kernels were released until 3.19 in February 2015.
In April 2015, Torvalds released kernel version 4.0. By February 2015, Linux had received contributions from nearly 12,000 programmers from more than 1,200 companies, including some of the world's largest software and hardware vendors. Version 4.1 of Linux, released in June 2015, contains over 19.5 million lines of code contributed by almost 14,000 programmers.
A total of 1,991 developers, of whom 334 were first-time contributors, added more than 553,000 lines of code to version 5.8, breaking the record previously held by version 4.9.
According to Stack Overflow's annual Developer Survey of 2019, more than 53% of all respondents have developed software for the Linux OS and about 27% for Android, although only about 25% develop on Linux-based operating systems.
Most websites run on Linux-based operating systems, and all of the world's 500 most powerful supercomputers use some kind of OS based on Linux.
Linux distributions bundle the kernel with system software (e.g., the GNU C Library, systemd, and other Unix utilities and daemons) and a wide selection of application software, but their usage share on desktops is low in comparison to other operating systems.
Android, which accounts for the majority of the installed base of all operating systems for mobile devices, is responsible for the rising usage of the Linux kernel, together with its wide use in a large variety of embedded devices.
Architecture and features
Linux is a monolithic kernel with a modular design (e.g., it can insert and remove loadable kernel modules at runtime), supporting most features once only available in closed source kernels of non-free operating systems. The rest of the article makes use of the convention of the official manual pages of UNIX and Unix-like operating systems: the numbers that follow the names of commands, interfaces, and other features specify the section (i.e., the type of OS component or feature) they belong to (e.g., execve(2) refers to a system call, while exec(3) refers to a userspace library wrapper). The following list and the subsequent sections give a non-comprehensive overview of Linux's architectural design and of some of its noteworthy features.
Concurrent computing and (with the availability of enough CPU cores for tasks that are ready to run) even true parallel execution of many processes at once (each of them having one or more threads of execution) on SMP and NUMA architectures.
Selection and configuration of hundreds of kernel features and drivers (using one of the make *config family of commands before running the compilation), modification of kernel parameters before booting (usually by inserting instructions into the lines of the GRUB2 menu), and fine tuning of kernel behavior at run-time (using the sysctl(8) interface to /proc/sys/).
Configuration (again using the make *config commands) and run-time modification of the policies (via nice(2), setpriority(2), and the family of sched_*(2) syscalls) of the task schedulers that allow preemptive multitasking (both in user mode and, since the 2.6 series, in kernel mode); the Completely Fair Scheduler (CFS) has been the default scheduler of Linux since 2007 and it uses a red-black tree which can search, insert and delete process information (task_struct) with O(log n) time complexity, where n is the number of runnable tasks.
Advanced memory management with paged virtual memory.
Inter-process communications and synchronization mechanism.
A virtual filesystem on top of several concrete filesystems (ext4, Btrfs, XFS, JFS, FAT32, and many more).
Configurable I/O schedulers, the ioctl(2) syscall that manipulates the underlying device parameters of special files (it is a non-standard system call, since arguments, returns, and semantics depend on the device driver in question), support for POSIX asynchronous I/O (however, because they scale poorly with multithreaded applications, a family of Linux-specific I/O system calls (io_*(2)) had to be created for the management of asynchronous I/O contexts suitable for concurrent processing).
OS-level virtualization (with Linux-VServer), paravirtualization and hardware-assisted virtualization (with KVM or Xen, and using QEMU for hardware emulation); On the Xen hypervisor, the Linux kernel provides support to build Linux distributions (such as openSuSE Leap and many others) that work as Dom0, that are virtual machine host servers that provide the management environment for the user's virtual machines (DomU).
I/O Virtualization with VFIO and SR-IOV. Virtual Function I/O (VFIO) exposes direct device access to user space in a secure memory (IOMMU) protected environment. With VFIO, a VM Guest can directly access hardware devices on the VM Host Server. This technique improves performance compared to both full virtualization and paravirtualization. However, with VFIO, devices cannot be shared with multiple VM guests. Single Root I/O Virtualization (SR-IOV) combines the performance gains of VFIO and the ability to share a device with several VM Guests (but it requires special hardware that must be capable of appearing to two or more VM guests as different devices).
Security mechanisms for discretionary and mandatory access control (SELinux, AppArmor, POSIX ACLs, and others).
Several types of layered communication protocols (including the Internet protocol suite).
Asymmetric multiprocessing via the RPMsg subsystem.
Most device drivers and kernel extensions run in kernel space (ring 0 in many CPU architectures), with full access to the hardware. Some exceptions run in user space; notable examples are filesystems based on FUSE/CUSE, and parts of UIO. Furthermore, the X Window System and Wayland, the windowing system and display server protocols that most people use with Linux, do not run within the kernel. In contrast, the actual interfacing with the GPUs of graphics cards is handled by an in-kernel subsystem called the Direct Rendering Manager (DRM).
Unlike standard monolithic kernels, device drivers are easily configured as modules, and loaded or unloaded while the system is running and can also be pre-empted under certain conditions in order to handle hardware interrupts correctly and to better support symmetric multiprocessing. By choice, Linux has no stable device driver application binary interface.
Linux typically makes use of memory protection and virtual memory and can also handle non-uniform memory access, however the project has absorbed μClinux which also makes it possible to run Linux on microcontrollers without virtual memory.
The hardware is represented in the file hierarchy. User applications interact with device drivers via entries in the /dev or /sys directories. Process information is likewise mapped to the file system through the /proc directory.
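As a minimal illustration of this file-based interface, the following userspace sketch (hypothetical, not part of the kernel sources) reads the process information that the kernel exports under /proc for the calling process:

```c
/* Sketch: read process information exported by the kernel through the
 * /proc pseudo filesystem. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/self/status", "r");   /* per-process kernel data */
    char line[256];

    if (!f) {
        perror("fopen");
        return 1;
    }
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);                     /* e.g. Name:, Pid:, VmSize: */
    fclose(f);
    return 0;
}
```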
Interfaces
Linux is a clone of UNIX, and aims towards POSIX and Single UNIX Specification compliance. The kernel also provides system calls and other interfaces that are Linux-specific. In order to be included in the official kernel, the code must comply with a set of licensing rules.
The Linux Application binary interface (ABI) between the kernel and the user space has four degrees of stability (stable, testing, obsolete, removed); however, the system calls are expected to never change in order to not break the userspace programs that rely on them.
Loadable kernel modules (LKMs), by design, cannot rely on a stable ABI. Therefore they must always be recompiled whenever a new kernel executable is installed in a system, otherwise they will not be loaded. In-tree drivers that are configured to become an integral part of the kernel executable (vmlinux) are statically linked by the building process.
There is also no guarantee of stability of source-level in-kernel API and, because of this, device drivers code, as well as the code of any other kernel subsystem, must be kept updated with kernel evolution. Any developer who makes an API change is required to fix any code that breaks as the result of their change.
Kernel-to-userspace API
The set of the Linux kernel API that regards the interfaces exposed to user applications is fundamentally composed of UNIX and Linux-specific system calls. A system call is an entry point into the Linux kernel. For example, among the Linux-specific ones there is the family of the clone(2) system calls. Most extensions must be enabled by defining the _GNU_SOURCE macro in a header file or when the user-land code is being compiled.
System calls can only be invoked by using assembly instructions which enable the transition from unprivileged user space to privileged kernel space in ring 0. For this reason, the C standard library (libC) acts as a wrapper to most Linux system calls, by exposing C functions that, only when needed, transparently enter the kernel, which then executes on behalf of the calling process. For those system calls not exposed by libC, e.g. the fast userspace mutex (futex), the library provides a function called syscall(2) which can be used to invoke them explicitly.
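The sketch below, assuming a GNU/Linux system with glibc, contrasts the two paths just described: getpid(2) is reached through its ordinary libC wrapper, while gettid(2), which older glibc versions did not wrap, is invoked explicitly through syscall(2):

```c
/* Sketch: a libC system-call wrapper versus the generic syscall(2) function. */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    pid_t pid = getpid();               /* ordinary libC wrapper           */
    long  tid = syscall(SYS_gettid);    /* explicit invocation by number   */

    printf("PID (wrapper) = %d, TID (raw syscall) = %ld\n", (int)pid, tid);
    return 0;
}
```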
Pseudo filesystems (e.g., the sysfs and procfs filesystems) and special files (e.g., /dev/random, /dev/sda, /dev/tty, and many others) constitute another layer of interface to kernel data structures representing hardware or logical (software) devices.
Kernel-to-userspace ABI
Because of the differences existing between the hundreds of various implementations of the Linux OS, executable objects, even though they are compiled, assembled, and linked for running on a specific hardware architecture (that is, they use the ISA of the target hardware), often cannot run on different Linux Distributions. This issue is mainly due to distribution-specific configurations and a set of patches applied to the code of the Linux kernel, differences in system libraries, services (daemons), filesystem hierarchies, and environment variables.
The main standard concerning application and binary compatibility of Linux distributions is the Linux Standard Base (LSB). However, the LSB goes beyond what concerns the Linux kernel, because it also defines the desktop specifications, the X libraries and Qt that have little to do with it. The LSB version 5 is built upon several standards and drafts (POSIX, SUS, X/Open, File System Hierarchy (FHS), and others).
The parts of the LSB largely relevant to the kernel are the General ABI (gABI), especially the System V ABI and the Executable and Linking Format (ELF), and the Processor Specific ABI (psABI), for example the Core Specification for X86-64.
The standard ABI for how x86_64 user programs invoke system calls is to load the syscall number into the rax register, and the other parameters into rdi, rsi, rdx, r10, r8, and r9, and finally to put the syscall assembly instruction in the code.
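A minimal sketch of this convention, assuming GCC or Clang on x86_64 Linux, issues write(2) (syscall number 1) directly with GNU inline assembly; the register constraints mirror the ABI described above:

```c
/* Sketch: invoking write(2) through the raw x86_64 syscall ABI. */
int main(void)
{
    static const char msg[] = "hello from a raw syscall\n";
    long ret;

    __asm__ volatile (
        "syscall"
        : "=a" (ret)                        /* return value comes back in rax */
        : "0" (1L),                         /* rax: __NR_write on x86_64      */
          "D" ((long)1),                    /* rdi: file descriptor (stdout)  */
          "S" (msg),                        /* rsi: buffer                    */
          "d" (sizeof(msg) - 1)             /* rdx: byte count                */
        : "rcx", "r11", "memory");          /* clobbered by the syscall insn  */

    return ret < 0 ? 1 : 0;
}
```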
In-kernel API
There are several kernel internal APIs utilized between the different subsystems. Some are available only within the kernel subsystems, while a somewhat limited set of in-kernel symbols (i.e., variables, data structures, and functions) is also exposed to dynamically loadable modules (e.g., device drivers loaded on demand), provided they are exported with the EXPORT_SYMBOL and EXPORT_SYMBOL_GPL macros (the latter reserved to modules released under a GPL-compatible license).
Linux provides in-kernel APIs that manipulate data structures (e.g., linked lists, radix trees, red-black trees, queues) or perform common routines (e.g., copy data from and to user space, allocate memory, print lines to the system log, and so on) that have remained stable at least since Linux version 2.6.
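A hypothetical out-of-tree module (the module name, symbols and values are illustrative only) shows a few of these stable in-kernel APIs together: the linked-list helpers, kmalloc()/kfree(), printk()/pr_info() and a GPL-only symbol export:

```c
/* Illustrative sketch of in-kernel APIs; not production code. */
#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/list.h>
#include <linux/errno.h>

struct demo_item {
    int value;
    struct list_head node;          /* embedded list linkage          */
};

static LIST_HEAD(demo_list);        /* head of a kernel linked list   */

/* a symbol other modules could use; GPL-only export as noted above */
int demo_count(void)
{
    struct demo_item *it;
    int n = 0;

    list_for_each_entry(it, &demo_list, node)
        n++;
    return n;
}
EXPORT_SYMBOL_GPL(demo_count);

static int __init demo_init(void)
{
    int i;

    for (i = 0; i < 3; i++) {
        struct demo_item *it = kmalloc(sizeof(*it), GFP_KERNEL);

        if (!it)
            return -ENOMEM;
        it->value = i;
        list_add_tail(&it->node, &demo_list);
    }
    pr_info("demo: list populated, %d items\n", demo_count());
    return 0;
}

static void __exit demo_exit(void)
{
    struct demo_item *it, *tmp;

    list_for_each_entry_safe(it, tmp, &demo_list, node) {
        list_del(&it->node);
        kfree(it);
    }
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```

Such a module would be built against the installed kernel headers with the usual kbuild Makefile and loaded with insmod; it is a sketch of the APIs, not production code.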
In-kernel APIs include libraries of low-level common services used by device drivers:
SCSI Interfaces and libATA respectively, a peer-to-peer packet based communication protocol for storage devices attached to USB, SATA, SAS, Fibre Channel, FireWire, ATAPI device, and an in-kernel library to support [S]ATA host controllers and devices.
Direct Rendering Manager (DRM) and Kernel Mode Setting (KMS) for interfacing with GPUs and supporting the needs of modern 3D-accelerated video hardware, and for setting screen resolution, color depth and refresh rate
DMA buffers (DMA-BUF) for sharing buffers for hardware direct memory access across multiple device drivers and subsystems
Video4Linux for video capture hardware
Advanced Linux Sound Architecture (ALSA) for sound cards
New API for network interface controllers
mac80211 and cfg80211 - for wireless network interface controllers
In-kernel ABI
The Linux developers chose not to maintain a stable in-kernel ABI. Modules compiled for a specific version of the kernel cannot be loaded into another version without being recompiled, assuming that the in-kernel API has remained the same at the source level; otherwise, the module code must also be modified accordingly.
Processes and threads
Linux creates processes by means of the fork(2) or the newer clone(2) system calls. Depending on the given parameters, the new entity can share most or none of the resources of the caller. These syscalls can create new entities ranging from new independent processes (each having a special identifier called TGID within the task_struct data structure in kernel space, although that same identifier is called PID in userspace), to new threads of execution within the calling process (by using the CLONE_THREAD parameter). In this latter case the new entity owns the same TGID as the calling process and consequently also has the same PID in userspace.
If the executable is dynamically linked to shared libraries, a dynamic linker (for ELF objects, it is typically the ld-linux.so loader) is used to find and load the needed objects, prepare the program to run and then run it.
The Native POSIX Thread Library, simply known as the NPTL, provides the standard POSIX threads interface (pthreads) to userspace. Whenever a new thread is created using the pthread_create(3) POSIX interface, the clone(2) family of system calls must also be given the address of the function that the new thread must jump to. The Linux kernel provides the futex (acronym for "fast user-space mutex") mechanism for fast user-space locking and synchronization; the majority of the operations are performed in userspace, but it may be necessary to communicate with the kernel using the futex(2) system call.
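The following userspace sketch (illustrative; compile with -pthread) shows the effect described above: two NPTL threads report the same PID (the kernel's TGID) but different kernel task identifiers obtained via gettid:

```c
/* Sketch: NPTL threads share a PID (TGID) but have distinct TIDs. */
#define _GNU_SOURCE
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>
#include <sys/syscall.h>

static void *worker(void *arg)
{
    (void)arg;
    printf("thread: pid=%d tid=%ld\n", (int)getpid(), syscall(SYS_gettid));
    return NULL;
}

int main(void)
{
    pthread_t t;

    printf("main:   pid=%d tid=%ld\n", (int)getpid(), syscall(SYS_gettid));
    if (pthread_create(&t, NULL, worker, NULL) != 0)
        return 1;
    pthread_join(t, NULL);          /* wait for the thread to finish */
    return 0;
}
```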
A very special category of threads is the so-called kernel threads. They must not be confused with the above-mentioned threads of execution of the user's processes. Kernel threads exist only in kernel space and their only purpose is to concurrently run kernel tasks.
In contrast, whenever an independent process is created, the syscalls return to the next instruction of the same program concurrently in the parent process and in the child one (i.e., one program, two processes). Different return values (one per process) enable the program to know in which of the two processes it is currently executing. Programs need this information because the child process, a few steps after process duplication, usually invokes the execve(2) system call (possibly via the exec(3) family of wrapper functions in glibC) and replaces the program that is currently being run by the calling process with a new program, with newly initialized stack, heap, and (initialized and uninitialized) data segments. When this is done, it results in two processes that run two different programs.
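A minimal sketch of this fork-then-exec pattern (the echo command is chosen only as an example) looks as follows:

```c
/* Sketch: fork(2) returns twice; the child replaces its image via exec(3). */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();

    if (pid < 0) {                       /* fork failed */
        perror("fork");
        return 1;
    }
    if (pid == 0) {                      /* child: 0 returned, replace program */
        execlp("echo", "echo", "hello from the child process", (char *)NULL);
        perror("execlp");                /* only reached if exec failed */
        _exit(127);
    }
    /* parent: fork returned the child's PID; wait and report its status */
    int status;
    waitpid(pid, &status, 0);
    printf("parent: child %d exited with status %d\n",
           (int)pid, WEXITSTATUS(status));
    return 0;
}
```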
Depending on the effective user id (euid) and the effective group id (egid), a process running with user zero privileges (root, the system administrator, owns the identifier 0) can perform everything (e.g., kill all the other processes or recursively wipe out whole filesystems), whereas non-zero user processes cannot. Capabilities(7) divide the privileges traditionally associated with the superuser into distinct units, which can be independently enabled and disabled by the parent process or dropped by the child itself.
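As a small illustration, the sketch below (CAP_NET_RAW is chosen arbitrarily as the capability to query) prints the process' user and group identifiers and asks the kernel, via prctl(PR_CAPBSET_READ), whether that capability is still present in its bounding set:

```c
/* Sketch: inspecting euid/egid and one capability of the current process. */
#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <linux/capability.h>

int main(void)
{
    printf("uid=%d euid=%d gid=%d egid=%d\n",
           (int)getuid(), (int)geteuid(), (int)getgid(), (int)getegid());

    int in_bset = prctl(PR_CAPBSET_READ, CAP_NET_RAW, 0, 0, 0);
    if (in_bset < 0)
        perror("prctl");
    else
        printf("CAP_NET_RAW %s in the bounding set\n",
               in_bset ? "is" : "is not");
    return 0;
}
```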
Scheduling and preemption
The Linux scheduler is modular, in the sense that it enables different scheduling classes and policies. Scheduler classes are pluggable scheduler algorithms that can be registered with the base scheduler code. Each class schedules different types of processes. The core code of the scheduler iterates over each class in order of priority and chooses the highest priority scheduler that has a schedulable entity of type struct sched_entity ready to run. Entities may be threads, groups of threads, and even all the processes of a specific user.
Linux provides both user preemption as well as full kernel preemption. Preemption reduces latency, increases responsiveness, and makes Linux more suitable for desktop and real-time applications.
For normal tasks, by default, the kernel uses the Completely Fair Scheduler (CFS) class, introduced in the 2.6.23 version of the kernel. Internally this default-scheduler class is defined in a macro of a C header as SCHED_NORMAL. In other POSIX kernels, a similar policy known as SCHED_OTHER allocates CPU timeslices (i.e., it assigns absolute slices of the processor time depending on either the predetermined or dynamically computed priority of each process). The Linux CFS does away with absolute timeslices and assigns a fair proportion of CPU time, as a function of parameters like the total number of runnable processes and the time they have already run; this function also takes into account a kind of weight that depends on their relative priorities (nice values).
With user preemption, the kernel scheduler can replace the current process with the execution of a context switch to a different one that therefore acquires the computing resources for running (CPU, memory, and more). It does so according to the CFS algorithm (in particular, it uses a variable called vruntime for sorting entities and then chooses the one that has the smallest vruntime, i.e., the schedulable entity that has had the least share of CPU time), to the active scheduler policy and to the relative priorities. With kernel preemption, the kernel can preempt itself when an interrupt handler returns, when kernel tasks block, and whenever a subsystem explicitly calls the schedule() function.
The kernel also contains two POSIX-compliant real-time scheduling classes named SCHED_FIFO (realtime first-in-first-out) and SCHED_RR (realtime round-robin), both of which take precedence over the default class. An additional scheduling policy known as SCHED_DEADLINE, implementing the earliest deadline first algorithm (EDF), was added in kernel version 3.14, released on 30 March 2014. SCHED_DEADLINE takes precedence over all the other scheduling classes.
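A brief userspace sketch of switching the calling process to one of these realtime classes with sched_setscheduler(2) follows; the priority value is arbitrary, and the call needs root privileges or CAP_SYS_NICE:

```c
/* Sketch: move the calling process to the SCHED_FIFO realtime class. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sched.h>
#include <string.h>
#include <errno.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 10 };   /* 1..99 for FIFO/RR */

    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
        fprintf(stderr, "sched_setscheduler: %s\n", strerror(errno));
        return 1;
    }
    printf("now running under SCHED_FIFO, policy=%d\n", sched_getscheduler(0));
    return 0;
}
```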
Real-time PREEMPT_RT patches, included into the mainline Linux since version 2.6, provide a deterministic scheduler, the removal of preemption and interrupts disabling (where possible), PI Mutexes (i.e., locking primitives that avoid priority inversion), support for high precision event timers (HPET), preemptive Read-copy-update, (forced) IRQ threads, and other minor features.
Concurrency and synchronization
The kernel has different causes of concurrency (e.g., interrupts, bottom halves, preemption of kernel and user tasks, symmetrical multiprocessing). For protecting critical regions (sections of code that must be executed atomically), shared memory locations (like global variables and other data structures with global scope), and regions of memory that are asynchronously modifiable by hardware (e.g., having the C volatile type qualifier), Linux provides a large set of tools. They consist of atomic types (which can only be manipulated by a set of specific operators), spinlocks, semaphores, mutexes, and lockless algorithms (e.g., RCUs). Most lock-less algorithms are built on top of memory barriers for the purpose of enforcing memory ordering and preventing undesired side effects due to the compiler's optimizations.
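The fragment below is an illustrative in-kernel sketch (not taken from the kernel sources) of two of these primitives: an atomic counter and a spinlock protecting a critical region that may also be entered from interrupt context:

```c
/* Sketch: an atomic counter and an IRQ-safe spinlock in kernel code. */
#include <linux/spinlock.h>
#include <linux/atomic.h>

static atomic_t hits = ATOMIC_INIT(0);        /* lock-free counter          */
static DEFINE_SPINLOCK(stats_lock);           /* protects the shared state  */
static unsigned long last_stamp;

void record_hit(unsigned long now)
{
    unsigned long flags;

    atomic_inc(&hits);                        /* atomic read-modify-write   */

    spin_lock_irqsave(&stats_lock, flags);    /* also masks local IRQs      */
    last_stamp = now;                         /* critical region            */
    spin_unlock_irqrestore(&stats_lock, flags);
}
```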
PREEMPT_RT code included in mainline Linux provides RT-mutexes, a special kind of mutex which do not disable preemption and have support for priority inheritance. Almost all locks are changed into sleeping locks when using the PREEMPT_RT configuration for realtime operation. Priority inheritance avoids priority inversion by granting a low-priority task which holds a contended lock the priority of a higher-priority waiter until that lock is released.
Linux includes a kernel lock validator called Lockdep.
Interrupts management
The management of interrupts, although it could be seen as a single job, is divided into two separate parts. This split is due to the different time constraints and synchronization needs of the tasks that interrupt management is composed of. The first part is an asynchronous interrupt service routine that in Linux is known as the top half, while the second part is carried out by one of three types of so-called bottom halves (softirq, tasklets, and work queues). Linux interrupt service routines can be nested (i.e., a new IRQ can trap into a high-priority ISR that preempts any other lower-priority ISRs).
Memory management
Memory management in Linux is a complex topic. First of all, the kernel is not pageable (i.e., it is always resident in physical memory and cannot be swapped to the disk). In the kernel there is no memory protection (no SIGSEGV signals, unlike in userspace), therefore memory violations lead to instability and system crashes.
Linux implements virtual memory with 4- and 5-level page tables. As said, only user memory space is always pageable. The kernel maintains information about each page frame of RAM in dedicated data structures (of type struct page) that are populated immediately after boot and kept until shutdown, regardless of whether or not they are associated with virtual pages. Furthermore, it classifies all page frames into zones, according to their architecture-dependent constraints and intended use. For example, pages reserved for DMA operations are in ZONE_DMA, pages that are not permanently mapped to virtual addresses are in ZONE_HIGHMEM (in the x86_32 architecture this zone is for physical addresses above 896 MB, while x86_64 does not need it because x86_64 can permanently map physical pages that reside at higher addresses), and all that remains (with the exception of other less used classifications) is in ZONE_NORMAL.
Small chunks of memory can be dynamically allocated via the kmalloc() family of APIs and freed with the appropriate variant of kfree(). vmalloc() and vfree() (or kvmalloc() and kvfree(), which fall back between the two approaches) are used for large virtually contiguous chunks. alloc_pages() allocates the desired number of entire pages.
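An illustrative in-kernel sketch of these allocation APIs (sizes and the helper name are arbitrary) might look as follows:

```c
/* Sketch: kmalloc() for small physically contiguous buffers, vmalloc()
 * for large virtually contiguous ones, alloc_pages() for page frames. */
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/gfp.h>
#include <linux/errno.h>

int demo_alloc(void)
{
    int ret = -ENOMEM;
    void *small = kmalloc(256, GFP_KERNEL);          /* may sleep            */
    void *big   = vmalloc(4UL << 20);                /* 4 MiB, virt. contiguous */
    struct page *pages = alloc_pages(GFP_KERNEL, 2); /* 2^2 = 4 page frames  */

    if (small && big && pages) {
        /* ... use the buffers ... */
        ret = 0;
    }

    if (pages)
        __free_pages(pages, 2);
    vfree(big);                      /* vfree() and kfree() both accept NULL */
    kfree(small);
    return ret;
}
```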
The kernel includes the SLAB, SLUB and SLOB allocators as configurable alternatives. SLUB is the newest and is also the default allocator. It aims for simplicity and efficiency. SLUB has been made PREEMPT_RT compatible.
Supported architectures
While not originally designed to be portable, Linux is now one of the most widely ported operating system kernels, running on a diverse range of systems from the ARM architecture to IBM z/Architecture mainframe computers. The first port was performed on the Motorola 68000 platform. The modifications to the kernel were so fundamental that Torvalds viewed the Motorola version as a fork and a "Linux-like operating system". However, that moved Torvalds to lead a major restructure of the code to facilitate porting to more computing architectures. The first Linux that, in a single source tree, had code for more than i386 alone, supported the DEC Alpha AXP 64-bit platform.
Linux runs as the main operating system on IBM's Summit; all of the world's 500 fastest supercomputers run some operating system based on the Linux kernel, a big change from 1998, when the first Linux supercomputer was added to the list.
Linux has also been ported to various handheld devices such as Apple's iPhone 3G and iPod.
Supported devices
In 2007, the LKDDb project was started to build a comprehensive database of the hardware and protocols known to Linux kernels. The database is built automatically by static analysis of the kernel sources. Later, in 2014, the Linux Hardware project was launched to automatically collect a database of all tested hardware configurations with the help of users of various Linux distributions.
Live patching
Rebootless updates can even be applied to the kernel by using live patching technologies such as Ksplice, kpatch and kGraft. Minimalistic foundations for live kernel patching were merged into the Linux kernel mainline in kernel version 4.0, which was released on 12 April 2015. Those foundations, known as livepatch and based primarily on the kernel's ftrace functionality, form a common core capable of supporting hot patching by both kGraft and kpatch, by providing an application programming interface (API) for kernel modules that contain hot patches and an application binary interface (ABI) for the userspace management utilities. However, the common core included in Linux kernel 4.0 supports only the x86 architecture and does not provide any mechanisms for ensuring function-level consistency while the hot patches are applied. There is ongoing work on porting kpatch and kGraft to the common live patching core provided by the Linux kernel mainline.
Security
Kernel bugs present potential security issues. For example, they may allow for privilege escalation or create denial-of-service attack vectors. Over the years, numerous bugs affecting system security were found and fixed. New features are frequently implemented to improve the kernel's security.
Capabilities(7) have already been introduced in the section about processes and threads. Android makes use of them, and systemd gives administrators detailed control over the capabilities of processes.
Linux offers a wealth of mechanisms to reduce the kernel attack surface and improve security, collectively known as the Linux Security Modules (LSM). They comprise, among others, the Security-Enhanced Linux (SELinux) module, whose code was originally developed and then released to the public by the NSA, and AppArmor. SELinux is now actively developed and maintained on GitHub. SELinux and AppArmor provide support for access control security policies, including mandatory access control (MAC), though they differ profoundly in complexity and scope.
Another security feature is Seccomp BPF (SECure COMPuting with Berkeley Packet Filters), which works by filtering parameters and reducing the set of system calls available to user-land applications.
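To make the mechanism more concrete, here is a minimal user-space sketch of installing a seccomp BPF filter via prctl(). It is a simplified example rather than a hardened filter: it omits the architecture check that a production filter would perform, and the choice of execve as the denied system call is arbitrary.

```c
/* Simplified seccomp-BPF example: deny execve(), allow everything else.
 * A real filter would also validate seccomp_data->arch. */
#include <stddef.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/seccomp.h>
#include <linux/filter.h>

int main(void)
{
    struct sock_filter filter[] = {
        /* Load the system call number. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        /* If it is execve, return EPERM; otherwise allow. */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_execve, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | 1 /* EPERM */),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    /* Required so an unprivileged process may install a filter. */
    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) != 0) {
        perror("prctl(PR_SET_SECCOMP)");
        return 1;
    }
    puts("seccomp filter installed");
    return 0;
}
```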
Critics have accused kernel developers of covering up security flaws, or at least not announcing them; Linus Torvalds responded to this criticism in 2008.
Linux distributions typically release security updates to fix vulnerabilities in the Linux kernel. Many offer long-term support releases that receive security updates for a certain Linux kernel version for an extended period of time.
Development
Developer community
The community of Linux kernel developers comprises about 5000–6000 members. According to the "2017 State of Linux Kernel Development", a study issued by the Linux Foundation covering the commits for the releases 4.8 to 4.13, about 1500 developers were contributing, from about 200–250 companies on average. The top 30 developers contributed a little more than 16% of the code. As for companies, the top contributors were Intel (13.1%), Red Hat (7.2%), Linaro (5.6%) and IBM (4.1%); the second and fifth places were held by the 'none' (8.2%) and 'unknown' (4.1%) categories.
As with many large open-source software projects, developers are required to adhere to the Contributor Covenant, a code of conduct intended to address the harassment of minority contributors. Additionally, the use of inclusive terminology within the source code is mandated to prevent offense.
Source code management
The Linux development community uses Git to manage the source code. Git users clone the latest version of Torvalds' tree and keep their local copies up to date against it. Contributions are submitted as patches, in the form of text messages on the LKML (and often also on other mailing lists dedicated to particular subsystems). The patches must conform to a set of rules and to a formal language that, among other things, describes which lines of code are to be deleted and which others are to be added to the specified files. These patches can be processed automatically so that system administrators can apply them in order to make just some changes to the code or to incrementally upgrade to the next version. Linux is also distributed in GNU zip (gzip) and bzip2 formats.
Submitting code to the kernel
A developer who wants to change the Linux kernel starts by developing and testing that change. Depending on how significant the change is and how many subsystems it modifies, the change will be submitted either as a single patch or as multiple patches of source code. In the case of a single subsystem that is maintained by a single maintainer, these patches are sent as e-mails to the maintainer of the subsystem, with the appropriate mailing list in Cc. The maintainer and the readers of the mailing list will review the patches and provide feedback. Once the review process has finished, the subsystem maintainer accepts the patches into the relevant Git kernel tree. If the changes to the Linux kernel are bug fixes that are considered important enough, a pull request for the patches will be sent to Torvalds within a few days. Otherwise, a pull request will be sent to Torvalds during the next merge window. The merge window usually lasts two weeks and starts immediately after the release of the previous kernel version. The Git kernel source tree names all developers who have contributed to the Linux kernel in the Credits directory, and all subsystem maintainers are listed in Maintainers.
Programming language and coding style
Linux is written in a dialect of the C programming language supported by GCC, a compiler that extends the C standard in many ways, for example by allowing inline sections of code written in the assembly language (in GCC's "AT&T-style" syntax) of the target architecture. Since 2002, all the code must adhere to the 21 rules comprising the Linux Kernel Coding Style.
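As a small, hypothetical illustration of the GNU C extension mentioned above (not a fragment of the kernel source, and specific to x86), the helper below embeds an AT&T-syntax assembly instruction that reads the processor's time-stamp counter:

```c
/* Illustrative GNU C inline assembly (AT&T syntax), reading the x86 TSC. */
static inline unsigned long long read_tsc(void)
{
        unsigned int lo, hi;

        /* "=a" and "=d" bind the outputs to the EAX and EDX registers. */
        asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long long)hi << 32) | lo;
}
```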
GNU toolchain
The GNU Compiler Collection (GCC or GNU cc) is the default compiler for the mainline Linux sources, and it is invoked by a utility called make. Then the GNU Assembler (more often called GAS or GNU as) outputs the object files from the GCC-generated assembly code. Finally, the GNU Linker (GNU ld) is used to produce a statically linked executable kernel file called vmlinux. Both as and ld are part of the GNU Binary Utilities (binutils). The above-mentioned tools are collectively known as the GNU toolchain.
Compiler compatibility
GCC was for a long time the only compiler capable of correctly building Linux. In 2004, Intel claimed to have modified the kernel so that its C compiler was also capable of compiling it. There was another such reported success in 2009, with a modified 2.6.22 version.
Since 2010, effort has been underway to build Linux with Clang, an alternative compiler for the C language; as of 12 April 2014, the official kernel could almost be compiled by Clang. The project dedicated to this effort is named LLVMLinux after the LLVM compiler infrastructure upon which Clang is built. LLVMLinux does not aim to fork either Linux or the LLVM, therefore it is a meta-project composed of patches that are eventually submitted to the upstream projects. By enabling Linux to be compiled by Clang, developers may benefit from shorter compilation times.
In 2017, developers completed upstreaming patches to support building the Linux kernel with Clang in the 4.15 release, having backported support for x86-64 and AArch64 to the 4.4, 4.9, and 4.14 branches of the stable kernel tree. Google's Pixel 2 shipped with the first Clang-built Linux kernel, though patches for the Pixel (1st generation) did exist. In 2018, ChromeOS moved to building kernels with Clang by default, while Android made Clang and LLVM's linker LLD required for kernel builds in 2019. Google moved the production kernel used throughout its datacenters to being built with Clang in 2020. Today, the ClangBuiltLinux group, partly composed of members of LLVMLinux and having upstreamed patches from LLVMLinux, coordinates fixes to both Linux and LLVM to ensure compatibility.
Kernel debugging
Bugs involving the Linux kernel can be difficult to troubleshoot. This is because of the kernel's interaction with userspace and hardware, and also because kernel bugs can arise from a wider range of causes than those of user programs. A few examples of the underlying causes are semantic errors in code, misuse of synchronization primitives, and incorrect hardware management.
A report of a non-fatal bug in the kernel is called an "oops"; such deviations from correct behavior of the Linux kernel may allow continued operation with compromised reliability.
A critical and fatal error is reported via the panic() function, which prints a message and then halts the kernel.
One of the most common techniques used to find bugs in code is debugging by printing. For this purpose Linux provides an in-kernel API called printk(), which stores messages in a circular buffer. The syslog system call is used for reading and/or clearing the kernel message ring buffer and for setting the maximum log level of the messages to be sent to the console (i.e., one of the eight severity levels that can be passed to printk(), which tell the severity of the condition reported); it is usually invoked via the glibc wrapper klogctl(). Kernel messages are also exported to userland through the /dev/kmsg interface (e.g., systemd-journald reads that interface and by default appends the messages to its journal).
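An illustrative sketch of debugging by printing follows; the function name and the messages are invented, while printk(), the KERN_* severity levels and the pr_*() shorthand wrappers are in-tree interfaces.

```c
/* Illustrative printk() usage with different severity levels. */
#include <linux/printk.h>

static void example_report(int err)
{
        printk(KERN_INFO "example: initialisation complete\n");

        if (err)
                printk(KERN_ERR "example: request failed, error %d\n", err);

        /* pr_info(), pr_err(), pr_debug() are the usual shorthand wrappers. */
        pr_debug("example: dumping internal state for debugging\n");
}
```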
Another fundamental technique for debugging a running kernel is tracing. The ftrace mechanism is a Linux internal tracer; it is used for monitoring and debugging Linux at runtime and it can also analyze user space latencies due to kernel misbehavior. Furthermore, ftrace allows users to trace Linux at boot-time.
kprobes and kretprobes can break (like debuggers in userspace) into Linux and non-disruptively collect information. kprobes can be inserted into code at (almost) any address, while kretprobes work at function return. uprobes have similar purposes but they also have some differences in usage and implementation.
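A minimal sketch of registering a kprobe from kernel code is shown below; struct kprobe, register_kprobe() and unregister_kprobe() are real kernel interfaces, while the probed symbol and the handler names are chosen only for illustration.

```c
/* Illustrative kprobe registration; the probe target is just an example. */
#include <linux/kprobes.h>
#include <linux/printk.h>

/* Runs just before the probed function executes. */
static int example_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
        pr_info("example: hit probe at %s\n", p->symbol_name);
        return 0;
}

static struct kprobe example_probe = {
        .symbol_name = "kernel_clone",   /* hypothetical choice of probe target */
        .pre_handler = example_pre_handler,
};

static int example_attach_probe(void)
{
        return register_kprobe(&example_probe);  /* remove later with unregister_kprobe() */
}
```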
With KGDB Linux can be debugged in much the same way as userspace programs. KGDB requires an additional machine that runs GDB and that is connected to the target to be debugged using a serial cable or Ethernet.
Development model
The Linux kernel project integrates new code on a rolling basis. Software checked into the project must work and compile without error. Each kernel subsystem is assigned a maintainer who is responsible for reviewing patches against the kernel code standards and keeps a queue of patches that can be submitted to Linus Torvalds within a merge window of several weeks. Patches are merged by Torvalds into the source code of the prior stable Linux kernel release, creating the -rc release candidate for the next stable kernel. Once the merge window is closed only fixes to the new code in the development release are accepted. The -rc development release of the kernel goes through regression tests and once it is judged to be stable by Torvalds and the kernel subsystem maintainers a new Linux kernel is released and the development process starts all over again.
Developers who feel treated unfairly can report this to the Linux Foundation's Technical Advisory Board. In July 2013, the maintainer of the USB 3.0 driver Sage Sharp asked Torvalds to address the abusive commentary in the kernel development community. In 2014, Sharp backed out of Linux kernel development, saying that "The focus on technical excellence, in combination with overloaded maintainers, and people with different cultural and social norms, means that Linux kernel maintainers are often blunt, rude, or brutal to get their job done". At the linux.conf.au (LCA) conference in 2018, developers expressed the view that the culture of the community has gotten much better in the past few years. Daniel Vetter, the maintainer of the Intel drm/i915 graphics kernel driver, commented that the "rather violent language and discussion" in the kernel community has decreased or disappeared.
Laurent Pinchart asked developers for feedback on their experience with the kernel community at the 2017 Embedded Linux Conference Europe. The issues brought up were discussed a few days later at the Maintainers Summit. Concerns over the lack of consistency in how maintainers responded to patches submitted by developers were echoed by Shuah Khan, the maintainer of the kernel self-test framework. Torvalds contended that there would never be consistency in the handling of patches because different kernel subsystems have, over time, adopted different development processes. Therefore, it was agreed upon that each kernel subsystem maintainer would document the rules for patch acceptance.
Mainline Linux
The Git tree of Linus Torvalds that contains the Linux kernel is referred to as mainline Linux. Every stable kernel release originates from the mainline tree and is frequently published on kernel.org. Mainline Linux has solid support for only a small subset of the many devices that run Linux. Non-mainline support is provided by independent projects, such as Yocto or Linaro, but in many cases the kernel from the device vendor is needed. Using a vendor kernel likely requires a board support package.
Maintaining a kernel tree outside of mainline Linux has proven to be difficult.
Mainlining refers to the effort of adding support for a device to the mainline kernel, while there was formerly only support in a fork or no support at all. This usually includes adding drivers or device tree files. When this is finished, the feature or security fix is considered mainlined.
Linux-like kernel
The maintainer of the stable branch, Greg Kroah-Hartman, has applied the term Linux-like to downstream kernel forks by vendors that add millions of lines of code to the mainline kernel. In 2019, Google stated that they wanted to use the mainline Linux kernel in Android so the number of kernel forks would be reduced. The term Linux-like has also been applied to the Embeddable Linux Kernel Subset, which does not include the full mainline Linux kernel but a small modified subset of the code.
Linux forks
There are certain communities that develop kernels based on the official Linux. Some interesting bits of code from these forks (a slang term for "derived projects"), which include Linux-libre, Compute Node Linux, INK, L4Linux, RTLinux, and User-Mode Linux (UML), have been merged into the mainline. Some operating systems developed for mobile phones initially used heavily modified versions of Linux, including Google Android, Firefox OS, HP webOS, Nokia Maemo and Jolla Sailfish OS. In 2010, the Linux community criticised Google for effectively starting its own kernel tree.
Today Android uses a slightly customized Linux kernel where changes are implemented in device drivers so that little or no change to the core kernel code is required. Android developers also submit patches to the official Linux kernel, which can finally boot the Android operating system. For example, a Nexus 7 can boot and run mainline Linux.
At a 2001 presentation at the Computer History Museum, Linus Torvalds addressed a question about whether distributions of Linux use precisely the same kernel sources.
Development community conflicts
There have been several notable conflicts among Linux kernel developers. Examples of such conflicts are:
In July 2007, Con Kolivas announced that he would cease developing for the Linux kernel.
In July 2009, Alan Cox quit his role as the TTY layer maintainer after disagreement with Linus Torvalds.
In December 2010, there was a discussion between Linux SCSI maintainer James Bottomley and SCST maintainer Vladislav Bolkhovitin about which SCSI target stack should be included in the Linux kernel. This made some Linux users upset.
In June 2012, Torvalds made it very clear that he did not agree with NVIDIA releasing its drivers as closed.
In April 2014, Torvalds banned Kay Sievers from submitting patches to the Linux kernel for failing to deal with bugs that caused systemd to negatively interact with the kernel.
In October 2014, Lennart Poettering accused Torvalds of tolerating the rough discussion style on Linux kernel related mailing lists and of being a bad role model.
In March 2015, Christoph Hellwig filed a lawsuit against VMware for infringement of the copyright on the Linux kernel. Linus Torvalds made it clear that he did not agree with this and similar initiatives by calling lawyers a festering disease.
In April 2021, a team from the University of Minnesota was found to be submitting "bad faith" patches to the kernel as part of their research. This resulted in the immediate reversion of all patches ever submitted by a member of the university. In addition, a warning was issued by a senior maintainer that any future patch from the university would be rejected on sight.
Prominent Linux kernel developers have been aware of the importance of avoiding conflicts between developers. For a long time there was no code of conduct for kernel developers due to opposition by Linus Torvalds. However, a Linux Kernel Code of Conflict was introduced on 8 March 2015. It was replaced on 16 September 2018 by a new Code of Conduct based on the Contributor Covenant. This coincided with a public apology by Torvalds and a brief break from kernel development. On 30 November 2018, complying with the Code of Conduct, Jarkko Sakkinen of Intel sent out patches replacing instances of "fuck" appearing in source code comments with suitable versions focused on the word 'hug'.
Codebase
The 5.11 release of the Linux kernel had around 30.34 million lines of code. Roughly 14% of the code is part of the "core" (arch, kernel and mm directories), while 60% is drivers.
Estimated cost to redevelop
The cost to redevelop the Linux kernel version 2.6.0 in a traditional proprietary development setting has been estimated to be US$612 million (€467M, £394M) in 2004 prices using the COCOMO person-month estimation model. In 2006, a study funded by the European Union put the redevelopment cost of kernel version 2.6.8 higher, at €882M ($1.14bn, £744M).
This topic was revisited in October 2008 by Amanda McPherson, Brian Proffitt, and Ron Hale-Evans. Using David A. Wheeler's methodology, they estimated that redevelopment of the 2.6.25 kernel would cost $1.3bn (part of a total $10.8bn to redevelop Fedora 9). Garcia-Garcia and Alonso de Magdaleno of the University of Oviedo (Spain) estimated that the value added to the kernel annually was about €100M between 2005 and 2007 and €225M in 2008; they also estimated that it would cost more than €1bn (about $1.4bn as of February 2010) to develop it in the European Union.
A later estimate, using the then-current LOC (lines of code) count of a 2.6.x Linux kernel and wage numbers with David A. Wheeler's calculations, put the cost of redeveloping the Linux kernel at approximately $3bn (about €2.2bn), as it keeps getting bigger. An updated calculation, using the then-current 20,088,609 LOC for the 4.14.14 Linux kernel and the then-current US national average programmer salary of $75,506, showed it would cost approximately $14,725,449,000 (£11,191,341,000) to rewrite the existing code.
Maintenance and long-term support
The latest kernel version and older kernel versions are maintained separately. Most recent kernel releases were supervised by Linus Torvalds. Current versions are released by Greg Kroah-Hartman.
The Linux kernel developer community maintains a stable kernel by applying fixes for software bugs that have been discovered during the development of the subsequent stable kernel. Therefore, www.kernel.org will always list two stable kernels. The next stable Linux kernel is now released only 8 to 12 weeks later. For that reason, the Linux kernel maintainers have designated some stable kernel releases as longterm; these long-term support Linux kernels are updated with bug fixes for two or more years. There are six longterm Linux kernels: 5.15.23, 5.10.100, 5.4.179, 4.19.229, 4.14.266, and 4.9.301. The full list of releases is at Linux kernel version history.
Relation with Linux distributions
Most Linux users run a kernel supplied by their Linux distribution. Some distributions ship the "vanilla" or "stable" kernels. However, several Linux distribution vendors (such as Red Hat and Debian) maintain another set of Linux kernel branches which are integrated into their products. These are usually updated at a slower pace compared to the "vanilla" branch, and they usually include all fixes from the relevant "stable" branch, but at the same time they can also add support for drivers or features which had not been released in the "vanilla" version that the distribution vendor based its branch on.
Legal aspects
Licensing terms
Initially, Torvalds released Linux under a license which forbade any commercial use. This was changed in version 0.12 by a switch to the GNU General Public License version 2 (GPLv2). This license allows distribution and sale of possibly modified and unmodified versions of Linux but requires that all those copies be released under the same license and be accompanied by - or that, on request, free access is given to - the complete corresponding source code. Torvalds has described licensing Linux under the GPLv2 as the "best thing I ever did".
The Linux kernel is licensed explicitly under GNU General Public License version 2 only (GPL-2.0-only) with an explicit syscall exception (Linux-syscall-note), without offering the licensee the option to choose any later version, which is a common GPL extension. Contributed code must be available under GPL-compatible license.
Nevertheless, Linux began including binary blobs, which are proprietary, in its source tree and main distribution in 1996. This led other projects to start work on removing the proprietary blobs in order to produce a 100% libre kernel, such as gNewSense in 2006 and BLAG in 2007; the work of both would eventually lead to the founding of the Linux-libre project, which was made an official GNU package in 2012.
There was considerable debate about how easily the license could be changed to use later GPL versions (including version 3), and whether this change is even desirable. Torvalds himself specifically indicated upon the release of version 2.4.0 that his own code is released only under version 2. However, the terms of the GPL state that if no version is specified, then any version may be used, and Alan Cox pointed out that very few other Linux contributors had specified a particular version of the GPL.
In September 2006, a survey of 29 key kernel programmers indicated that 28 preferred GPLv2 to the then-current GPLv3 draft. Torvalds commented, "I think a number of outsiders... believed that I personally was just the odd man out because I've been so publicly not a huge fan of the GPLv3." This group of high-profile kernel developers, including Torvalds, Greg Kroah-Hartman and Andrew Morton, commented in the mass media about their objections to the GPLv3. They referred to clauses regarding DRM/tivoization, patents and "additional restrictions", and warned of a Balkanisation of the "Open Source Universe" by the GPLv3. Linus Torvalds, who decided not to adopt the GPLv3 for the Linux kernel, reiterated his criticism even years later.
Loadable kernel modules
It is debated whether some loadable kernel modules (LKMs) are to be considered derivative works under copyright law, and thereby whether or not they fall under the terms of the GPL.
In accordance with the license rules, LKMs that use only a public subset of the kernel interfaces are non-derived works; thus Linux gives system administrators the mechanisms to load out-of-tree binary objects into the kernel address space.
There are some out-of-tree loadable modules that make legitimate use of the dma_buf kernel feature. GPL-compliant code can certainly use it. However, a different possible use case would be Nvidia Optimus, which pairs a fast GPU with an Intel integrated GPU, where the Nvidia GPU writes into the Intel framebuffer when it is active. Nvidia cannot use this infrastructure, because doing so necessitates bypassing a rule that the feature can be used only by LKMs that are also GPL. Alan Cox replied on LKML, rejecting a request from one of Nvidia's engineers to remove this technical enforcement from the API. Torvalds clearly stated on the LKML that "[I] claim that binary-only kernel modules ARE derivative "by default"'".
On the other hand, Torvalds has also said that "[one] gray area in particular is something like a driver that was originally written for another operating system (i.e., clearly not a derived work of Linux in origin). THAT is a gray area, and _that_ is the area where I personally believe that some modules may be considered to not be derived works simply because they weren't designed for Linux and don't depend on any special Linux behaviour". Proprietary graphics drivers, in particular, are heavily discussed.
Firmware binary blobs
The official kernel, that is, the Linus git branch at the kernel.org repository, contains proprietary code (binary blobs) despite being released under the terms of the GNU GPLv2 license. Linux can also search filesystems to locate binary blobs, proprietary firmware, drivers, or other executable modules, and then load and link them into kernel space. Whenever proprietary modules are loaded into Linux, the kernel marks itself as being "tainted", and therefore bug reports from tainted kernels will often be ignored by developers.
When it is needed (e.g., for accessing boot devices or for speed), firmware can be built into the kernel; this means building the firmware into vmlinux. However, this is not always a viable option for technical or legal reasons (e.g., it is not permitted with firmware that is non-GPL-compatible, although this is quite common nonetheless).
Trademark
Linux is a registered trademark of Linus Torvalds in the United States, the European Union, and some other countries. A legal battle over the trademark began in 1996, when William Della Croce, a lawyer who was never involved in the development of Linux, started requesting licensing fees for the use of the word Linux. After it was proven that the word was in common use long before Della Croce's claimed first use, the trademark was awarded to Torvalds.
See also
Notes
References
Further reading
External links
Linux kernel documentation index
Linux kernel man pages
Kernel bugzilla, and regressions for each recent kernel version
Kernel Newbies, a source of various kernel-related information
Kernel coverage at LWN.net, an authoritative source of kernel-related information
Bootlin's Elixir Cross Referencer, a Linux kernel source code cross-reference
Finnish inventions
Free software programmed in C
Free system software
Software using the GPL license
Linus Torvalds
Monolithic kernels
Unix variants
Operating systems
Free and open-source software |
33685698 | https://en.wikipedia.org/wiki/Ada%20Initiative | Ada Initiative | The Ada Initiative was a non-profit organization that sought to increase women's participation in the free culture movement, open source technology and open culture. The organization was founded in 2011 by Linux kernel developer and open source advocate Valerie Aurora and open source developer and advocate Mary Gardiner (the founder of AussieChix, the largest organization for women in open source in Australia). It was named after Ada Lovelace, who is often celebrated as the world's first computer programmer, as is the Ada programming language. In August 2015, the Ada Initiative board announced that the organization would shut down in October 2015. According to the announcement, the Initiative's executive leadership decided to step down, and the organization was unable to find acceptable replacement leaders.
History
Valerie Aurora, already an activist for women in open source, joined Mary Gardiner and members of Geek Feminism to develop anti-harassment policies for conferences after Noirin Shirley was sexually assaulted at ApacheCon 2010. Aurora quit her job as a Linux kernel developer at Red Hat and, with Gardiner, founded the Ada Initiative in February 2011.
In 2014, Valerie Aurora announced her intent to step down as executive director of the Ada Initiative, and an executive search committee was formed to find her replacement. Mary Gardiner, deputy executive director, chose not to be a candidate. The committee, headed by Sumana Harihareswara and Mary Gardiner, announced in March 2015 that the Ada Initiative had hired Crystal Huff as the new executive director. Huff, formerly of Luminoso in Boston, continued to work from Massachusetts in her new role.
In August 2015, the Ada Initiative announced that the organization would close in mid-October, 2015. The announcement described the leadership challenge facing the Initiative: neither co-founder intended to continue as executive director. According to the post on the Ada Initiative website: We felt the likelihood of finding a new ED who could effectively fit into Valerie’s shoes was low. We also considered several other options for continuing the organization, including changing its programs, or becoming volunteer-only. After much deliberation, the board decided to do an orderly shutdown of the Ada Initiative, in which the organization would open source all of our remaining knowledge and expertise in freely reusable and modifiable form. We don’t feel like non-profits need to exist forever. The Ada Initiative did a lot of great work, and we are happy about it. The previous hire of Crystal Huff, announced several months earlier, was not mentioned other than to note "that hire didn't work out."
Administration
All services provided by the Ada Initiative were pro bono, and the organization was supported by member donations. In the summer of 2011, the Ada Initiative launched a campaign to raise start-up funds with a goal of contributions from 100 funders. The campaign wrapped up six days before its planned deadline. The organization's first major sponsor was Linux Australia, who provided support alongside Puppet Labs, DreamHost, The Mail Archive and Google. Aurora and Gardiner were the only staff members, serving full-time roles in the organization.
Board and advisory board
The Ada Initiative was governed by a seven-person board of directors, who oversaw its management. The board included co-founder Mary Gardiner, Sue Gardner, Amelia Greenhall, Rachel Chalmers, Alicia Gibb, Andrea Horbinski and Marina Zhurakhinskaya. An advisory board of about 30 members provided input about ideas and projects.
Initiatives
In collaboration with members of LinuxChix, Geek Feminism and other groups, the Ada Initiative developed anti-harassment policies for conferences. The Ada Initiative also worked with open source conference organizers to adopt, create and communicate policies to make conferences safer and more inviting for all attendees, particularly women. Conferences such as Ubuntu Developer Summits and all Linux Foundation events, including LinuxCon, have adopted policies based on the Ada Initiative's work.
The Ada Initiative developed a policy framework for creating a Women in Open Source Scholarship, as well as programming guides for outreach projects and events. The organization also hosted workshops and training. These workshops and programs consisted of Allies Workshops for male and institutional supporters and "First Patch Week" programs, which encourage women's participation in free and open source software (FOSS) through mentoring. The workshop framework is freely available, although the Ada Initiative also offered facilitators to conduct the workshops in person.
By encouraging women's participation in open source culture, the Ada Initiative sought to have women engage in open source professionally and full-time, not just as volunteers. The organization also researched women's roles and experiences in open source, focusing on bringing research up to date; the last survey of the gender balance in open source had been completed in 2006. Research methodology and a new survey were produced in 2011. A repeat of the survey took place in 2013, with hopes of providing a standard resource for the industry. The 2011 survey invited participants of any gender and inquired about subjects regarding open source and free software, hardware, open mapping, and other related open source areas, as well as free culture such as Creative Commons, online activism, mashup, maker, hacker spaces and related communities.
The Ada Initiative was the organizer of AdaCamp, an unconference "dedicated to increasing women’s participation in open technology and culture." Seven AdaCamps were held between 2012 and 2015.
Violet Blue's security presentation
In February 2013, the organizers of the Security B-Sides San Francisco conference canceled speaker Violet Blue's talk, sex +/- drugs: known vulns and exploits, due to concerns raised by the Ada Initiative that it contained rape triggers, as well as the Ada Initiative's consideration of the subject as off-topic for a security conference. The abrupt cancellation provoked intense discussion in the information security industry. Since the event at B-Sides SF, lead organizer Ian Fung has outlined his account of the interactions between Blue, Aurora, and the Ada Initiative on the B-Sides SF front page, contradicting some of the claims made by both the Ada Initiative and Blue.
See also
Ada Project, The
Anita Borg Institute for Women and Technology
Contributor Covenant
Discrimination
Sexism in the technology industry
Women in computing
References
External links
Official website
Census, March 2011: Demographic breakdown of responses from the Ada Initiative.
Ada Initiative Census Results Part 2
Information technology organizations based in North America
Women's organizations based in the United States
Organizations for women in science and technology
Defunct organizations based in the United States
Free and open-source software organizations
2011 establishments in the United States
Organizations established in 2011
2015 disestablishments in the United States
Women in computing |
42253 | https://en.wikipedia.org/wiki/Data%20mining | Data mining | Data mining is a process of extracting and discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal of extracting information (with intelligent methods) from a data set and transform the information into a comprehensible structure for further use. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.
The term "data mining" is a misnomer, because the goal is the extraction of patterns and knowledge from large amounts of data, not the extraction (mining) of data itself. It also is a buzzword and is frequently applied to any form of large-scale data or information processing (collection, extraction, warehousing, analysis, and statistics) as well as any application of computer decision support system, including artificial intelligence (e.g., machine learning) and business intelligence. The book Data mining: Practical machine learning tools and techniques with Java (which covers mostly machine learning material) was originally to be named just Practical machine learning, and the term data mining was only added for marketing reasons. Often the more general terms (large scale) data analysis and analytics—or, when referring to actual methods, artificial intelligence and machine learning—are more appropriate.
The actual data mining task is the semi-automatic or automatic analysis of large quantities of data to extract previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection), and dependencies (association rule mining, sequential pattern mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting is part of the data mining step, but do belong to the overall KDD process as additional steps.
The difference between data analysis and data mining is that data analysis is used to test models and hypotheses on the dataset, e.g., analyzing the effectiveness of a marketing campaign, regardless of the amount of data; in contrast, data mining uses machine learning and statistical models to uncover clandestine or hidden patterns in a large volume of data.
The related terms data dredging, data fishing, and data snooping refer to the use of data mining methods to sample parts of a larger population data set that are (or may be) too small for reliable statistical inferences to be made about the validity of any patterns discovered. These methods can, however, be used in creating new hypotheses to test against the larger data populations.
Etymology
In the 1960s, statisticians and economists used terms like data fishing or data dredging to refer to what they considered the bad practice of analyzing data without an a-priori hypothesis. The term "data mining" was used in a similarly critical way by economist Michael Lovell in an article published in the Review of Economic Studies in 1983. Lovell indicates that the practice "masquerades under a variety of aliases, ranging from "experimentation" (positive) to "fishing" or "snooping" (negative)".
The term data mining appeared around 1990 in the database community, generally with positive connotations. For a short time in the 1980s the phrase "database mining"™ was used, but since it had been trademarked by HNC, a San Diego-based company, to pitch their Database Mining Workstation, researchers consequently turned to data mining. Other terms used include data archaeology, information harvesting, information discovery, knowledge extraction, etc. Gregory Piatetsky-Shapiro coined the term "knowledge discovery in databases" for the first workshop on the same topic (KDD-1989), and this term became more popular in the AI and machine learning communities. However, the term data mining became more popular in the business and press communities. Currently, the terms data mining and knowledge discovery are used interchangeably.
In the academic community, the major forums for research started in 1995 when the First International Conference on Data Mining and Knowledge Discovery (KDD-95) was started in Montreal under AAAI sponsorship. It was co-chaired by Usama Fayyad and Ramasamy Uthurusamy. A year later, in 1996, Usama Fayyad launched the journal by Kluwer called Data Mining and Knowledge Discovery as its founding editor-in-chief. Later he started the SIGKDD Newsletter SIGKDD Explorations. The KDD International conference became the primary highest quality conference in data mining with an acceptance rate of research paper submissions below 18%. The journal Data Mining and Knowledge Discovery is the primary research journal of the field.
Background
The manual extraction of patterns from data has occurred for centuries. Early methods of identifying patterns in data include Bayes' theorem (1700s) and regression analysis (1800s). The proliferation, ubiquity and increasing power of computer technology have dramatically increased data collection, storage, and manipulation ability. As data sets have grown in size and complexity, direct "hands-on" data analysis has increasingly been augmented with indirect, automated data processing, aided by other discoveries in computer science, especially in the field of machine learning, such as neural networks, cluster analysis, genetic algorithms (1950s), decision trees and decision rules (1960s), and support vector machines (1990s). Data mining is the process of applying these methods with the intention of uncovering hidden patterns in large data sets. It bridges the gap from applied statistics and artificial intelligence (which usually provide the mathematical background) to database management by exploiting the way data is stored and indexed in databases to execute the actual learning and discovery algorithms more efficiently, allowing such methods to be applied to ever-larger data sets.
Process
The knowledge discovery in databases (KDD) process is commonly defined with the stages:
Selection
Pre-processing
Transformation
Data mining
Interpretation/evaluation.
There are, however, many variations on this theme, such as the Cross-industry standard process for data mining (CRISP-DM), which defines six phases:
Business understanding
Data understanding
Data preparation
Modeling
Evaluation
Deployment
or a simplified process such as (1) Pre-processing, (2) Data Mining, and (3) Results Validation.
Polls conducted in 2002, 2004, 2007 and 2014 show that the CRISP-DM methodology is the leading methodology used by data miners. The only other data mining standard named in these polls was SEMMA. However, 3–4 times as many people reported using CRISP-DM. Several teams of researchers have published reviews of data mining process models, and Azevedo and Santos conducted a comparison of CRISP-DM and SEMMA in 2008.
Pre-processing
Before data mining algorithms can be used, a target data set must be assembled. As data mining can only uncover patterns actually present in the data, the target data set must be large enough to contain these patterns while remaining concise enough to be mined within an acceptable time limit. A common source for data is a data mart or data warehouse. Pre-processing is essential to analyze the multivariate data sets before data mining. The target set is then cleaned. Data cleaning removes the observations containing noise and those with missing data.
Data mining
Data mining involves six common classes of tasks:
Anomaly detection (outlier/change/deviation detection) – The identification of unusual data records, that might be interesting or data errors that require further investigation.
Association rule learning (dependency modeling) – Searches for relationships between variables. For example, a supermarket might gather data on customer purchasing habits. Using association rule learning, the supermarket can determine which products are frequently bought together and use this information for marketing purposes. This is sometimes referred to as market basket analysis.
Clustering – is the task of discovering groups and structures in the data that are in some way or another "similar", without using known structures in the data (a toy sketch of one clustering algorithm follows this list).
Classification – is the task of generalizing known structure to apply to new data. For example, an e-mail program might attempt to classify an e-mail as "legitimate" or as "spam".
Regression – attempts to find a function that models the data with the least error that is, for estimating the relationships among data or datasets.
Summarization – providing a more compact representation of the data set, including visualization and report generation.
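As a toy illustration of the clustering task listed above, the following sketch runs a very small k-means procedure on invented one-dimensional data; k-means is only one of many clustering algorithms, and the data, cluster count and starting centroids here are arbitrary.

```c
/* Toy k-means clustering of one-dimensional points into k = 2 groups. */
#include <stdio.h>
#include <math.h>

#define N 8
#define K 2

int main(void)
{
    double x[N] = {1.0, 1.2, 0.8, 1.1, 8.0, 8.3, 7.9, 8.2};  /* made-up data */
    double centroid[K] = {0.0, 10.0};                        /* arbitrary start */
    int assign[N];

    for (int iter = 0; iter < 20; iter++) {
        /* Assignment step: attach each point to its nearest centroid. */
        for (int i = 0; i < N; i++) {
            assign[i] = 0;
            for (int c = 1; c < K; c++)
                if (fabs(x[i] - centroid[c]) < fabs(x[i] - centroid[assign[i]]))
                    assign[i] = c;
        }
        /* Update step: move each centroid to the mean of its assigned points. */
        for (int c = 0; c < K; c++) {
            double sum = 0.0;
            int count = 0;
            for (int i = 0; i < N; i++)
                if (assign[i] == c) { sum += x[i]; count++; }
            if (count)
                centroid[c] = sum / count;
        }
    }
    for (int c = 0; c < K; c++)
        printf("cluster %d centroid: %.2f\n", c, centroid[c]);
    return 0;
}
```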
Results validation
Data mining can unintentionally be misused and can then produce results that appear to be significant but which do not actually predict future behavior, cannot be reproduced on a new sample of data, and are of little use. Often this results from investigating too many hypotheses and not performing proper statistical hypothesis testing. A simple version of this problem in machine learning is known as overfitting, but the same problem can arise at different phases of the process, and thus a train/test split (when applicable at all) may not be sufficient to prevent this from happening.
The final step of knowledge discovery from data is to verify that the patterns produced by the data mining algorithms occur in the wider data set. Not all patterns found by data mining algorithms are necessarily valid. It is common for data mining algorithms to find patterns in the training set which are not present in the general data set. This is called overfitting. To overcome this, the evaluation uses a test set of data on which the data mining algorithm was not trained. The learned patterns are applied to this test set, and the resulting output is compared to the desired output. For example, a data mining algorithm trying to distinguish "spam" from "legitimate" emails would be trained on a training set of sample e-mails. Once trained, the learned patterns would be applied to the test set of e-mails on which it had not been trained. The accuracy of the patterns can then be measured from how many e-mails they correctly classify. Several statistical methods may be used to evaluate the algorithm, such as ROC curves.
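The evaluation described in this paragraph can be illustrated with a deliberately tiny sketch: a one-dimensional "spam score" threshold model is fitted on training data and then scored only on held-out test data. All of the data, the feature and the model are invented for the example.

```c
/* Toy train/test evaluation of a one-dimensional threshold classifier. */
#include <stdio.h>

static int classify(double score, double threshold) { return score > threshold; }

static double accuracy(const double *x, const int *label, int n, double thr)
{
    int correct = 0;
    for (int i = 0; i < n; i++)
        if (classify(x[i], thr) == label[i])
            correct++;
    return (double)correct / n;
}

int main(void)
{
    /* Made-up scores and labels (1 = spam). */
    double train_x[] = {0.1, 0.4, 0.35, 0.8, 0.9, 0.7};
    int    train_y[] = {0,   0,   0,    1,   1,   1};
    double test_x[]  = {0.2, 0.6, 0.85, 0.3};
    int    test_y[]  = {0,   1,   1,    0};

    /* "Training": choose the candidate threshold that best fits the training data. */
    double best_thr = 0.0, best_acc = 0.0;
    for (double thr = 0.0; thr <= 1.0; thr += 0.05) {
        double a = accuracy(train_x, train_y, 6, thr);
        if (a > best_acc) { best_acc = a; best_thr = thr; }
    }

    /* Evaluation: report accuracy on data the model never saw. */
    printf("train accuracy %.2f, test accuracy %.2f (threshold %.2f)\n",
           best_acc, accuracy(test_x, test_y, 4, best_thr), best_thr);
    return 0;
}
```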
If the learned patterns do not meet the desired standards, it is then necessary to re-evaluate and change the pre-processing and data mining steps. If the learned patterns do meet the desired standards, the final step is to interpret the learned patterns and turn them into knowledge.
Research
The premier professional body in the field is the Association for Computing Machinery's (ACM) Special Interest Group (SIG) on Knowledge Discovery and Data Mining (SIGKDD). Since 1989, this ACM SIG has hosted an annual international conference and published its proceedings, and since 1999 it has published a biannual academic journal titled "SIGKDD Explorations".
Computer science conferences on data mining include:
CIKM Conference – ACM Conference on Information and Knowledge Management
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases
KDD Conference – ACM SIGKDD Conference on Knowledge Discovery and Data Mining
Data mining topics are also present on many data management/database conferences such as the ICDE Conference, SIGMOD Conference and International Conference on Very Large Data Bases
Standards
There have been some efforts to define standards for the data mining process, for example, the 1999 European Cross Industry Standard Process for Data Mining (CRISP-DM 1.0) and the 2004 Java Data Mining standard (JDM 1.0). Development on successors to these processes (CRISP-DM 2.0 and JDM 2.0) was active in 2006 but has stalled since. JDM 2.0 was withdrawn without reaching a final draft.
For exchanging the extracted models—in particular for use in predictive analytics—the key standard is the Predictive Model Markup Language (PMML), which is an XML-based language developed by the Data Mining Group (DMG) and supported as exchange format by many data mining applications. As the name suggests, it only covers prediction models, a particular data mining task of high importance to business applications. However, extensions to cover (for example) subspace clustering have been proposed independently of the DMG.
Notable uses
Data mining is used wherever there is digital data available today. Notable examples of data mining can be found throughout business, medicine, science, and surveillance.
Privacy concerns and ethics
While the term "data mining" itself may have no ethical implications, it is often associated with the mining of information in relation to peoples' behavior (ethical and otherwise).
The ways in which data mining can be used can in some cases and contexts raise questions regarding privacy, legality, and ethics. In particular, data mining government or commercial data sets for national security or law enforcement purposes, such as in the Total Information Awareness Program or in ADVISE, has raised privacy concerns.
Data mining requires data preparation which uncovers information or patterns which compromise confidentiality and privacy obligations. A common way for this to occur is through data aggregation. Data aggregation involves combining data together (possibly from various sources) in a way that facilitates analysis (but that also might make identification of private, individual-level data deducible or otherwise apparent). This is not data mining per se, but a result of the preparation of data before—and for the purposes of—the analysis. The threat to an individual's privacy comes into play when the data, once compiled, cause the data miner, or anyone who has access to the newly compiled data set, to be able to identify specific individuals, especially when the data were originally anonymous.
It is recommended to be aware of the following before data are collected:
The purpose of the data collection and any (known) data mining projects;
How the data will be used;
Who will be able to mine the data and use the data and their derivatives;
The status of security surrounding access to the data;
How collected data can be updated.
Data may also be modified so as to become anonymous, so that individuals may not readily be identified. However, even "anonymized" data sets can potentially contain enough information to allow identification of individuals, as occurred when journalists were able to find several individuals based on a set of search histories that were inadvertently released by AOL.
The inadvertent revelation of personally identifiable information leading to the provider violates Fair Information Practices. This indiscretion can cause financial, emotional, or bodily harm to the indicated individual. In one instance of privacy violation, the patrons of Walgreens filed a lawsuit against the company in 2011 for selling prescription information to data mining companies, who in turn provided the data to pharmaceutical companies.
Situation in Europe
Europe has rather strong privacy laws, and efforts are underway to further strengthen the rights of the consumers. However, the U.S.–E.U. Safe Harbor Principles, developed between 1998 and 2000, currently effectively expose European users to privacy exploitation by U.S. companies. As a consequence of Edward Snowden's global surveillance disclosure, there has been increased discussion to revoke this agreement, as in particular the data will be fully exposed to the National Security Agency, and attempts to reach an agreement with the United States have failed.
In the United Kingdom in particular there have been cases of corporations using data mining as a way to target certain groups of customers forcing them to pay unfairly high prices. These groups tend to be people of lower socio-economic status who are not savvy to the ways they can be exploited in digital market places.
Situation in the United States
In the United States, privacy concerns have been addressed by the US Congress via the passage of regulatory controls such as the Health Insurance Portability and Accountability Act (HIPAA). The HIPAA requires individuals to give their "informed consent" regarding information they provide and its intended present and future uses. According to an article in Biotech Business Week, "'[i]n practice, HIPAA may not offer any greater protection than the longstanding regulations in the research arena,' says the AAHC. More importantly, the rule's goal of protection through informed consent is approach a level of incomprehensibility to average individuals." This underscores the necessity for data anonymity in data aggregation and mining practices.
U.S. information privacy legislation such as HIPAA and the Family Educational Rights and Privacy Act (FERPA) applies only to the specific areas that each such law addresses. The use of data mining by the majority of businesses in the U.S. is not controlled by any legislation.
Copyright law
Situation in Europe
Under European copyright and database laws, the mining of in-copyright works (such as by web mining) without the permission of the copyright owner is not legal. Where a database is pure data in Europe, it may be that there is no copyright, but database rights may exist, so data mining becomes subject to intellectual property owners' rights that are protected by the Database Directive. On the recommendation of the Hargreaves review, this led the UK government to amend its copyright law in 2014 to allow content mining as a limitation and exception. The UK was the second country in the world to do so, after Japan, which introduced an exception in 2009 for data mining. However, due to the restrictions of the Information Society Directive (2001), the UK exception only allows content mining for non-commercial purposes. UK copyright law also does not allow this provision to be overridden by contractual terms and conditions.
Since 2020, Switzerland has also regulated data mining, allowing it in the research field under certain conditions laid down by Art. 24d of the Swiss Copyright Act. This new article entered into force on 1 April 2020.
The European Commission facilitated stakeholder discussion on text and data mining in 2013, under the title Licences for Europe. The focus on licensing, rather than limitations and exceptions, as the solution to this legal issue led representatives of universities, researchers, libraries, civil society groups and open access publishers to leave the stakeholder dialogue in May 2013.
Situation in the United States
US copyright law, and in particular its provision for fair use, upholds the legality of content mining in America, and other fair use countries such as Israel, Taiwan and South Korea. As content mining is transformative, that is it does not supplant the original work, it is viewed as being lawful under fair use. For example, as part of the Google Book settlement the presiding judge on the case ruled that Google's digitization project of in-copyright books was lawful, in part because of the transformative uses that the digitization project displayed—one being text and data mining.
Software
Free open-source data mining software and applications
The following applications are available under free/open-source licenses. Public access to application source code is also available.
Carrot2: Text and search results clustering framework.
Chemicalize.org: A chemical structure miner and web search engine.
ELKI: A university research project with advanced cluster analysis and outlier detection methods written in the Java language.
GATE: a natural language processing and language engineering tool.
KNIME: The Konstanz Information Miner, a user-friendly and comprehensive data analytics framework.
Massive Online Analysis (MOA): a real-time big data stream mining with concept drift tool in the Java programming language.
MEPX: cross-platform tool for regression and classification problems based on a Genetic Programming variant.
mlpack: a collection of ready-to-use machine learning algorithms written in the C++ language.
NLTK (Natural Language Toolkit): A suite of libraries and programs for symbolic and statistical natural language processing (NLP) for the Python language.
OpenNN: Open neural networks library.
Orange: A component-based data mining and machine learning software suite written in the Python language.
PSPP: Data mining and statistics software under the GNU Project similar to SPSS
R: A programming language and software environment for statistical computing, data mining, and graphics. It is part of the GNU Project.
Scikit-learn: an open-source machine learning library for the Python programming language
Torch: An open-source deep learning library for the Lua programming language and scientific computing framework with wide support for machine learning algorithms.
UIMA: The UIMA (Unstructured Information Management Architecture) is a component framework for analyzing unstructured content such as text, audio and video – originally developed by IBM.
Weka: A suite of machine learning software applications written in the Java programming language.
Proprietary data-mining software and applications
The following applications are available under proprietary licenses.
Angoss KnowledgeSTUDIO: data mining tool
LIONsolver: an integrated software application for data mining, business intelligence, and modeling that implements the Learning and Intelligent OptimizatioN (LION) approach.
PolyAnalyst: data and text mining software by Megaputer Intelligence.
Microsoft Analysis Services: data mining software provided by Microsoft.
NetOwl: suite of multilingual text and entity analytics products that enable data mining.
Oracle Data Mining: data mining software by Oracle Corporation.
PSeven: platform for automation of engineering simulation and analysis, multidisciplinary optimization and data mining provided by DATADVANCE.
Qlucore Omics Explorer: data mining software.
RapidMiner: An environment for machine learning and data mining experiments.
SAS Enterprise Miner: data mining software provided by the SAS Institute.
SPSS Modeler: data mining software provided by IBM.
STATISTICA Data Miner: data mining software provided by StatSoft.
Tanagra: Visualisation-oriented data mining software, also for teaching.
Vertica: data mining software provided by Hewlett-Packard.
Google Cloud Platform: automated custom ML models managed by Google.
Amazon SageMaker: managed service provided by Amazon for creating & productionising custom ML models.
See also
Other resources
International Journal of Data Warehousing and Mining
References
Further reading
Cabena, Peter; Hadjnian, Pablo; Stadler, Rolf; Verhees, Jaap; Zanasi, Alessandro (1997). Discovering Data Mining: From Concept to Implementation. Prentice Hall.
Chen, M.S.; Han, J.; Yu, P.S. (1996). "Data mining: an overview from a database perspective". IEEE Transactions on Knowledge and Data Engineering, 8 (6), 866–883.
Feldman, Ronen; Sanger, James (2007). The Text Mining Handbook. Cambridge University Press.
Guo, Yike; Grossman, Robert (eds.) (1999). High Performance Data Mining: Scaling Algorithms, Applications and Systems. Kluwer Academic Publishers.
Han, Jiawei; Kamber, Micheline; Pei, Jian (2006). Data Mining: Concepts and Techniques. Morgan Kaufmann.
Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome (2001). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer.
Liu, Bing (2007, 2011). Web Data Mining: Exploring Hyperlinks, Contents and Usage Data. Springer.
Nisbet, Robert; Elder, John; Miner, Gary (2009). Handbook of Statistical Analysis & Data Mining Applications. Academic Press/Elsevier.
Poncelet, Pascal; Masseglia, Florent; Teisseire, Maguelonne (eds.) (October 2007). Data Mining Patterns: New Methods and Applications. Information Science Reference.
Tan, Pang-Ning; Steinbach, Michael; Kumar, Vipin (2005). Introduction to Data Mining.
Theodoridis, Sergios; Koutroumbas, Konstantinos (2009). Pattern Recognition, 4th Edition. Academic Press.
Weiss, Sholom M.; Indurkhya, Nitin (1998). Predictive Data Mining. Morgan Kaufmann.
Ye, Nong (2003). The Handbook of Data Mining. Mahwah, NJ: Lawrence Erlbaum.
External links
Formal sciences |
47231784 | https://en.wikipedia.org/wiki/Teofilo%20F.%20Gonzalez | Teofilo F. Gonzalez | Teofilo Francisco Gonzalez Arce (born January 26, 1948 in Monterrey, Mexico) is a Mexican-American computer scientist who is professor emeritus of computer science at the University of California, Santa Barbara.
In 1972, Gonzalez was one of the first students who earned a bachelor's degree in computer science (Ingeniero en Sistemas Computacionales) in Mexico, at the Monterrey Institute of Technology and Higher Education.
He completed his Ph.D. in 1975 from the University of Minnesota under the supervision of Sartaj Sahni. He taught at the University of Oklahoma from 1975 to 1976, at the Pennsylvania State University from 1976 to 1979, at the Monterrey Institute of Technology and Higher Education from 1979 to 1980, and at the University of Texas at Dallas from 1980 to 1984, before joining the UCSB computer science faculty in 1984. He spent Sabbatical Leaves at Utrecht University (1990) in the Netherlands and the Monterrey Institute of Technology and Higher Education. Professor Gonzalez became a Fellow of IASTED in 2009.
Gonzalez is known for his highly cited pioneering research on the hardness of approximation; for his sub-linear, best-possible (unless P = NP) approximation algorithm for the metric k-center problem (k-tMM clustering), based on the farthest-first traversal; for introducing the open-shop scheduling problem, together with algorithms for its solution that have found numerous applications in several research areas; and for his research on flow-shop and job-shop scheduling algorithms. He is the editor of the first and second editions of the Handbook on Approximation Algorithms and Metaheuristics, and co-editor of Volume 1 (Computer Science and Software Engineering) of the Computing Handbook Set.
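The farthest-first traversal at the heart of that k-center result is simple to state: start from an arbitrary point and repeatedly add the point farthest from the centers chosen so far. The sketch below is an illustrative Python rendering of that description, not Gonzalez's original code; the names points, k and dist are assumptions made for the example, with dist standing for any metric.

def farthest_first_centers(points, k, dist):
    # Gonzalez-style 2-approximation for the metric k-center problem.
    centers = [points[0]]                                 # start from an arbitrary point
    nearest = [dist(p, centers[0]) for p in points]       # distance to closest chosen center
    while len(centers) < k:
        i = max(range(len(points)), key=lambda j: nearest[j])   # farthest remaining point
        centers.append(points[i])
        nearest = [min(nearest[j], dist(points[j], points[i])) for j in range(len(points))]
    return centers

When the loop ends, the largest value remaining in nearest is the covering radius achieved, and it is at most twice the optimal k-center radius for any metric dist.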
Selected publications
References
External links
Home page
Google scholar profile
1948 births
Living people
Monterrey Institute of Technology and Higher Education alumni
University of Minnesota College of Science and Engineering alumni
American academics of Mexican descent
Mexican emigrants to the United States
American computer scientists
Mexican computer scientists
Theoretical computer scientists
University of Oklahoma faculty
Pennsylvania State University faculty
Monterrey Institute of Technology and Higher Education faculty
University of Texas at Dallas faculty
University of California, Santa Barbara faculty |
31664744 | https://en.wikipedia.org/wiki/Supercomputing%20in%20India | Supercomputing in India | Supercomputing in India has a history going back to the 1980s. The Government of India created an indigenous development programme after it had difficulty purchasing foreign supercomputers. Ranked by the number of supercomputer systems on the TOP500 list, India is 63rd in the world, with the PARAM Siddhi-AI being the fastest supercomputer in India.
History
Early years
India had faced difficulties in the 1980s when trying to purchase supercomputers for academic and weather forecasting purposes. In 1986 the National Aerospace Laboratories (NAL) started the Flosolver project to develop a computer for computational fluid dynamics and aerospace engineering. The Flosolver MK1, described as a parallel processing system, started operations in December 1986.
Indigenous development programme
In 1987 the Indian Government had requested to purchase a Cray X-MP supercomputer; this request was denied by the United States government as the machine could have a dual use in weapons development. After this problem, in the same year, the Government of India decided to promote an indigenous supercomputer development programme. Multiple projects were commissioned from different groups including the Centre for Development of Advanced Computing (C-DAC), the Centre for Development of Telematics (C-DOT), the National Aerospace Laboratories (NAL), the Bhabha Atomic Research Centre (BARC), and the Advanced Numerical Research and Analysis Group (ANURAG). C-DOT created "CHIPPS": the C-DOT High-Performance Parallel Processing System. NAL had started to develop the Flosolver in 1986. BARC created the Anupam series of supercomputers. ANURAG created the PACE series of supercomputers.
C-DAC First Mission
The Centre for Development of Advanced Computing (C-DAC) was created at some point between November 1987 and August 1988. C-DAC was given an initial 3 year budget of Rs375 million to create a 1000MFLOPS (1GFLOPS) supercomputer by 1991. C-DAC unveiled the PARAM 8000 supercomputer in 1991. This was followed by the PARAM 8600 in 1992/1993. These machines demonstrated Indian technological prowess to the world and led to export success.
C-DAC Second Mission
The PARAM 8000 was considered a success for C-DAC in delivering a gigaFLOPS range parallel computer. From 1992 C-DAC undertook its "Second Mission" to deliver a 100 GFLOPS range computer by 1997/1998. The plan was to allow the computer to scale to 1 teraFLOPS. In 1993 the PARAM 9000 series of supercomputers was released, which had a peak computing power of 5 GFLOPS. In 1998 the PARAM 10000 was released; this had a sustained performance of 38 GFLOPS on the LINPACK benchmark.
C-DAC Third Mission
The C-DAC's third mission was to develop a teraFLOPS range computer. The PARAM Padma was delivered in December 2002. This was the first Indian supercomputer to feature on a list of the world's fastest supercomputers, in June 2003.
Development by other groups in the early 2000s
By the early 2000s it was noted that only ANURAG, BARC, C-DAC and NAL were continuing development of their supercomputers. NAL's Flosolver had 4 subsequent machines built in its series. At the same time ANURAG continued to develop PACE, primarily based on SPARC processors.
12th Five Year Plan
The Indian Government has proposed to commit 2.5 billion USD to supercomputing research during the 12th Five-Year Plan period (2012–2017). The project will be handled by the Indian Institute of Science (IISc), Bangalore. Additionally, it was later revealed that India plans to develop a supercomputer with processing power in the exaflops range, to be developed by C-DAC within five years of approval.
National Supercomputing Mission
In 2015 the Ministry of Electronics and Information Technology announced a "National Supercomputing Mission" (NSM) to install 73 indigenous supercomputers throughout the country by 2022. This is a seven-year program worth $730 million (Rs. 4,500 crore). Whilst previously computers were assembled in India, the NSM aims to produce the components within the country. The NSM is being implemented by C-DAC and the Indian Institute of Science.
The aim is to create a cluster of geographically-distributed high-performance computing centers linked over a high-speed network, connecting various academic and research institutions across India. This has been dubbed the "National Knowledge Network" (NKN). The mission involves both capacity and capability machines and includes standing up three petascale supercomputers.
The first phase involved deployment of supercomputers which have 60% Indian components. The second phase machines are intended to have an Indian designed processor, with a completion date of April 2021. The third and final phase intends to deploy fully indigenous supercomputers, with an aimed speed of 45 petaFLOPS within the NKN.
By October 2020, the first supercomputer assembled in India had been installed. The NSM aimed to have the manufacturing capability for indigenous production by December 2020.
Rankings
Current TOP500
There are three systems based in India on the TOP500 supercomputer list.
India's historical rank in TOP500
See also
Computers
EKA (supercomputer)
PARAM
Wipro Supernova
General
History of supercomputing
Supercomputing in China
Supercomputing in Europe
Supercomputing in Japan
TOP500
References
Supercomputer sites
Supercomputing
Science and technology in India
Information technology in India |
13590849 | https://en.wikipedia.org/wiki/Ronald%20Baecker | Ronald Baecker | Ronald Baecker (born October 7, 1942) is an Emeritus Professor of Computer Science and Bell Chair in Human-Computer Interaction at the University of Toronto (U of T). He was the co-founder of the Dynamic Graphics Project, and is the founder of the Knowledge Media Design Institute (KMDI) and the Technologies for Aging Gracefully Lab (TAGlab). He is the author of Computers and Society: Modern Perspectives, published by Oxford University Press in 2019.
Summary of research interests
Dr. Baecker is an expert in human-computer interaction (HCI), user interface (UI) design, software visualization, multimedia, computer-supported cooperative work and learning, and entrepreneurship in the software industry.
From 1966 through 1969, Dr. Baecker developed the first comprehensive conceptual framework for computer animation on Genesys, a foundational computer animation system that he himself designed and built at the MIT Lincoln Laboratory. This work helped launch the field of computer animation. He also developed a novel computer animation system for children in Smalltalk at the Xerox Palo Alto Research Center in 1974, worked on interactive computer graphics, and guided the realization and testing of the animated icon from 1988 through 1990.
Continuing his work on software visualization, between 1973 and 1981 he produced the film Sorting Out Sorting – a seminal piece elucidating the potential of computer animation for depicting program behavior that propelled the field of algorithm animation. In addition to the film, Dr. Baecker and his colleagues presented a systematic and comprehensive new approach to enhancing the presentation of computer program source using graphic design principles (1982–88), formulated conceptual frameworks for software visualization (1981–92), and constructed powerful yet unobtrusive systems for the visualization of programs in particular programming languages, which they then applied to the LOGO system (1988–94).
From the 1990s through the 2010s, he created two innovative collaborative multimedia technologies. His team was the first group to employ hierarchically structured multimedia for the interactive authoring of digital video and other dynamic visual presentations, and the first to apply such a system to the creation of materials for software support and training. Additionally, his team worked on the use of highly interactive webcasting with structured rich media archives as the ePresence Interactive Media environment for collaborative learning.
Furthermore, Dr. Baecker founded TAGlab to research the design of technologies for aging gracefully and was a founding researcher in AGE-WELL, Canada's Technology and Aging research network. This research is primarily focused on increasing computer literacy among seniors and using technology to help individuals work better and be safer. His current research has been centered on envisioning, designing, building, and evaluating technological aids intended for individuals with Alzheimer's disease, mild cognitive impairment, amnesia, vision loss, and stroke, and the natural consequences of aging.
Systems built include the development of online social gaming environments for seniors, online brain fitness environments, mobile phone software to help individuals with communication challenges to speak, e-book software to help people with visual or motor impairments to read, digital communicating picture frame technology to help connect individuals who are isolated and lonely to family and friends, and a thanatosensitive design of web portals to support individuals who are grieving.
Early life
Dr. Baecker was born in Kenosha, Wisconsin on October 7, 1942. When he was four, his family moved to Pittsburgh, Pennsylvania. His scientific career got off to an early start when, in April 1958, he was awarded third prize in the American Chemical Society Chemistry test (Pittsburgh Section). He went on to win several other science honors while at Taylor Allderdice High School, including attending the Westinghouse Science Honors Institute (October 1958 to April 1959), and the Sun-Telegraph, Pittsburgh and Allegheny County High Schools Scholastic Award in Science (1959).
In 1960, he began work as a summer research assistant at Koppers Company Research Labs. He was awarded a B.S. degree in Physics from the Massachusetts Institute of Technology (MIT) in 1963 and received an M.S. degree in the then burgeoning field of Electrical Engineering, again from MIT in 1964. At the end of 1964, he traveled to the University of Heidelberg in Germany to further his studies in applied mathematics where he stayed until July 1965. He returned to his alma mater to study Computer Science at MIT's Department of Electrical Engineering and received his Ph.D. in June 1969.
Professional career
Entrepreneurial and management experience
Dr. Baecker has started five software companies, three of which he led as founding CEO.
His first start-up, Human Computing Resources, later renamed HCR Corporation, was founded in 1976 with an investment of $11,000 and eventually became a multi-million-dollar UNIX systems software firm. In 1990, it was sold to the Santa Cruz Operation (SCO) and continued as SCO Canada until early 1996.
His second venture, Expresto Software Corporation attempted to commercialize video authoring and publishing technology; in 2002, it was sold to a shareholder. Following the dissolution of Expresto, Dr. Baecker originated the concept and led the development for a successful proposal for an NSERC Research Network Grant entitled the Network for Effective Collaboration Through Advanced Research (NECTAR).
Between 1995 and 1998, Dr. Baecker founded the Knowledge Media Design Institute at the Faculty of Information at the University of Toronto. This was the first institute at U of T to address interdisciplinary issues within the production, creation, and distribution of knowledge media. In 2002, KMDI sponsored the development of a knowledge media design collaborative graduate degree program. He continues to serve as the Institute's Chief Scientist to provide intellectual leadership.
Between 2008 through 2011, he spun out Captual Technologies Inc. to develop and market ePresence, the first official open source software release from the University of Toronto, worldwide. In 2011, the company was sold to Canadian courseware management solutions firm Desire2Learn.
Following his project to develop context-aware mobile communications apps to aid speaking by adults with communication disorders, such as those caused by strokes, or by children with learning and communication challenges such as autism spectrum disorder, Dr. Baecker assisted the start-up of MyVoice Inc. to commercialize a context-aware mobile speech aid app. The company was doing well until it was abandoned by its CEO in 2016.
Since 2010, Dr. Baecker has been transforming his work on aids for socially isolated and lonely seniors to keep them connected to family and friends into Famili.net Communications, which is currently commercializing a novel tablet-based communications tool for older adults.
In 2021, Dr. Baecker established Computers and Society, a non-profit collective — a virtual community and resource site for learning and thinking ethically about computers and society. The mission of the virtual community is to facilitate multidisciplinary communication, learning, and thinking about computers and society and computer ethics issues. Dr. Baecker recognized that literature about this increasingly important area is widely scattered, and that multi-disciplinary collaboration and discussion among various perspectives are needed. The inaugural board is chaired by Dr. Baecker, with Ishtiaque Ahmed, Casey Fiesler, Brett Frischmann, Uma Kalkar, Nicholas Logler, C. Dianne Martin, Dan Shefet and Rebecca Wright as board members.
Teaching experience
Dr. Baecker is an active lecturer, and consultant on human-computer interaction and user interface design, user support, software visualization, multimedia, computer-supported cooperative work and learning, the Internet, entrepreneurship and strategic planning in the software industry, and the role of information technology in business.
Since 1972, Dr. Baecker has developed original courses in interactive computer graphics, human-computer interaction, user interface and computational media and knowledge media design, computer-supported cooperative work, computer literacy, and software entrepreneurship. In 1973, he co-founded the Dynamic Graphics Project at the University of Toronto, creating the first Canadian university group studying HCI and computer graphics. In addition, in 1990, he developed the first undergraduate human-computer interaction specialization within computer science at U of T.
Additionally, in 2004, he founded the TAGlab to support research and development of technologies to aid cognition, communication, and social interaction among seniors. Collaborators include individuals from Baycrest, Columbia Medical School, Sunnybrook Health Sciences Centre, and Toronto Rehabilitation Institute.
Publications and patents
Dr. Baecker has published over 200 papers and articles on HCI, UI design, software visualization, computer-supported cooperative work, and related topics. He has published two videos and authored or co-authored seven books. His past publications are:
“Readings in Human Computer Interaction: Towards the Year 2000” (Morgan Kaufmann Publishers, 1995)
“Readings in Groupware and Computer Supported Cooperative Work: Software to Facilitate Human-Human Collaboration” (Morgan Kaufmann Publishers, 1993)
“Human Factors and Typography for More Readable Programs” (Addison-Wesley Publishing, 1990)
“Readings in Human Computer Interaction: A Multidisciplinary Approach” (Morgan Kaufmann Publishers, 1978)
His fifth book, “Computers and Society: Modern Perspectives” (Oxford University Press, 2019), examines current and systemic issues among computers and society. Work on this book has launched him on another research project, which is a systematic examination of the attributes AI systems need to have in order to be trusted with the critical, often life-and-death decisions that are now being proposed for machine learning algorithms, which include uses in recruiting, medical diagnosis, child welfare, criminal justice, senior's care, driving, and warfare.
During the COVID-19 pandemic, Dr. Baecker co-authored The COVID-19 Solutions Guide in 2020, a resource intended to guide people on how to survive medically, emotionally, and financially at the height of the pandemic.
In 2021, Dr. Baecker independently published Digital Dreams Have Become Nightmares: What We Must Do, a current and lively illustrated introduction to computers and society and computer ethics topics.
In addition, Dr. Baecker is the co-owner of two patents, one for “Content-Based Depiction of Computer Icons” (1995) and another for “Method for Generating and Displaying Content-Based Depictions of Computer-Generated Objects” (1996).
Honors and awards
Lifetime Achievement Award from the Canadian Association of Computer Science/Association d’informatique Canadienne, the national organization of Canadian Computer Science Departments/Schools/Faculties, May 2015.
Given the 3rd Canadian Digital Media Pioneer Award, GRAND Network of Centres of Excellence, May 2013.
Elected as an ACM Fellow, November 2011.
Together with Alex Levy, Aakash Sahney, and Kevin Tonon, second place recipient of the 2011 University of Toronto Inventor of the Year Award in the Information and Computer Technology, Social Sciences and Humanities category, January 2011.
Awarded the 2007 Leadership Award of Merit from the Ontario Research and Innovation Optical Network (ORION) in June 2007.
Awarded the Canadian Human Computer Communications Society Achievement Award in May 2005.
Elected to the ACM SIGCHI CHI Academy in February 2005.
Named as one of 60 Pioneers in Computer Graphics by ACM SIGGRAPH, and honoured with a photographic collection exhibited at SIGGRAPH’98 and later at the Boston Computer Museum, July 1998.
Training of highly qualified persons
Dr. Baecker's students are or have been professors at:
University of Toronto
University of British Columbia
University of Alberta
University of Ontario Institute of Technology
Nipissing University
Georgia Institute of Technology
The Open University
Hong Kong University
National University of Singapore
Several community colleges.
Others are or have been researchers or professional staff at The National Research Council (Canada), Microsoft and Microsoft Research, Google, IBM, Oracle Corporation, Sun Microsystems, The University of Toronto, McMaster University, Xerox, Nynex, Intel, Nortel, SRI International, Pixar/Disney, Alias Research, ATI, Electronic Arts, Matrox, McGraw-Hill, T-Mobile, Amazon, Intuit, McKinsey Corporation, Mark Logic, Silk Road Technology, Caseware international, Artez Interactive, Altamont Computers, Nectarine Group, Sapient, and ISS (Singapore). Others have started or been instrumental in the growth of companies such as SideFX, Data Mirror, Inea, Viigo, and TokBox.
References
American computer scientists
Canadian computer scientists
1942 births
Living people
Fellows of the Association for Computing Machinery
MIT Department of Physics alumni
University of Toronto faculty
University of Maryland, College Park faculty
Columbia University staff
Massachusetts Institute of Technology staff
American emigrants to Canada
American expatriates in Germany |
29513 | https://en.wikipedia.org/wiki/Simula | Simula | Simula is the name of two simulation programming languages, Simula I and Simula 67, developed in the 1960s at the Norwegian Computing Center in Oslo, by Ole-Johan Dahl and Kristen Nygaard. Syntactically, it is an approximate superset of ALGOL 60, and was also influenced by the design of Simscript.
Simula 67 introduced objects, classes, inheritance and subclasses, virtual procedures, coroutines, and discrete event simulation, and featured garbage collection. Other forms of subtyping (besides inheriting subclasses) were introduced in Simula derivatives.
Simula is considered the first object-oriented programming language. As its name suggests, the first Simula version, dating from 1962, was designed for doing simulations; Simula 67, however, was designed to be a general-purpose programming language and provided the framework for many of the features of object-oriented languages today.
Simula has been used in a wide range of applications such as simulating very-large-scale integration (VLSI) designs, process modeling, communication protocols, algorithms, and other applications such as typesetting, computer graphics, and education. The influence of Simula is often understated, and Simula-type objects are reimplemented in C++, Object Pascal, Java, C#, and many other languages. Computer scientists such as Bjarne Stroustrup, creator of C++, and James Gosling, creator of Java, have acknowledged Simula as a major influence.
History
The following account is based on Jan Rune Holmevik's historical essay.
Kristen Nygaard started writing computer simulation programs in 1957. Nygaard saw a need for a better way to describe the heterogeneity and the operation of a system. To go further with his ideas on a formal computer language for describing a system, Nygaard realized that he needed someone with more computer programming skills than he had. Ole-Johan Dahl joined him on his work in January 1962. The decision to link the language to ALGOL 60 was made shortly after. By May 1962, the main concepts for a simulation language were set. SIMULA I was born, a special purpose programming language for simulating discrete event systems.
Kristen Nygaard was invited to visit the Eckert–Mauchly Computer Corporation late May 1962 in connection with the marketing of their new UNIVAC 1107 computer. At that visit, Nygaard presented the ideas of Simula to Robert Bemer, the director of systems programming at Univac. Bemer was a great ALGOL fan and found the Simula project compelling. Bemer was also chairperson of a session at the second international conference on information processing hosted by International Federation for Information Processing (IFIP). He invited Nygaard, who presented the paper "SIMULA – An Extension of ALGOL to the Description of Discrete-Event Networks".
The Norwegian Computing Center got a UNIVAC 1107 in August 1963 at a considerable discount, on which Dahl implemented the SIMULA I under contract with UNIVAC. The implementation was based on the UNIVAC ALGOL 60 compiler. SIMULA I was fully operational on the UNIVAC 1107 by January 1965. In the following few years, Dahl and Nygaard spent a lot of time teaching Simula. Simula spread to several countries around the world and SIMULA I was later implemented on other computers including the Burroughs B5500 and the Russian Ural-16.
In 1966 C. A. R. Hoare introduced the concept of record class construct, which Dahl and Nygaard extended with the concept of prefixing and other features to meet their requirements for a generalized process concept. Dahl and Nygaard presented their paper on Class and Subclass declarations at the IFIP Working Conference on simulation languages in Oslo, May 1967. This paper became the first formal definition of Simula 67. In June 1967, a conference was held to standardize the language and initiate a number of implementations. Dahl proposed to unify the type and the class concept. This led to serious discussions, and the proposal was rejected by the board. Simula 67 was formally standardized on the first meeting of the Simula Standards Group (SSG) in February 1968.
Simula was influential in the development of Smalltalk and later object-oriented programming languages. It also helped inspire the actor model of concurrent computation although Simula only supports coroutines and not true concurrency.
In the late sixties and the early seventies, there were four main implementations of Simula:
UNIVAC 1100 by Norwegian Computing Center (NCC)
System/360 and System/370 by NCC
CDC 3000 by University of Oslo's Joint Computer Installation at Kjeller
TOPS-10 by Swedish National Defence Research Institute (FOA)
These implementations were ported to a wide range of platforms. The TOPS-10 implemented the concept of public, protected, and private member variables and procedures, that later was integrated into Simula 87. Simula 87 is the latest standard and is ported to a wide range of platforms. There are mainly four implementations:
Simula AS
Lund Simula
GNU Cim
Portable Simula Revisited
In November 2001, Dahl and Nygaard were awarded the IEEE John von Neumann Medal by the Institute of Electrical and Electronics Engineers "For the introduction of the concepts underlying object-oriented programming through the design and implementation of SIMULA 67". In April 2002, they received the 2001 A. M. Turing Award by the Association for Computing Machinery (ACM), with the citation: "For ideas fundamental to the emergence of object oriented programming, through their design of the programming languages Simula I and Simula 67." Dahl and Nygaard died in June and August of that year, respectively, before the ACM Turing Award Lecture that was scheduled to be delivered at the November 2002 OOPSLA conference in Seattle.
Simula Research Laboratory is a research institute named after the Simula language, and Nygaard held a part-time position there from the opening in 2001. The new Computer Science building at the University of Oslo is named Ole Johan Dahl's House, in Dahl's honour, and the main auditorium is named Simula.
Sample code
Minimal program
The empty computer file is the minimal program in Simula, measured by the size of the source code. It consists of one thing only: a dummy statement.
However, the minimal program is more conveniently represented as an empty block:
Begin
End;
It begins executing and immediately terminates. The language lacks any return value from the program.
Classic Hello world
An example of a Hello world program in Simula:
Begin
OutText ("Hello, World!");
Outimage;
End;
Simula is case-insensitive.
Classes, subclasses and virtual procedures
A more realistic example with use of classes, subclasses and virtual procedures:
Begin
Class Glyph;
Virtual: Procedure print Is Procedure print;;
Begin
End;
Glyph Class Char (c);
Character c;
Begin
Procedure print;
OutChar(c);
End;
Glyph Class Line (elements);
Ref (Glyph) Array elements;
Begin
Procedure print;
Begin
Integer i;
For i:= 1 Step 1 Until UpperBound (elements, 1) Do
elements (i).print;
OutImage;
End;
End;
Ref (Glyph) rg;
Ref (Glyph) Array rgs (1 : 4);
! Main program;
rgs (1):- New Char ('A');
rgs (2):- New Char ('b');
rgs (3):- New Char ('b');
rgs (4):- New Char ('a');
rg:- New Line (rgs);
rg.print;
End;
The above example has one super class (Glyph) with two subclasses (Char and Line). There is one virtual procedure with two implementations. The execution starts by executing the main program. Simula lacks the concept of abstract classes, since classes with pure virtual procedures can be instantiated. This means that in the above example, all classes can be instantiated. Calling a pure virtual procedure will however produce a run-time error.
Call by name
Simula supports call by name, so Jensen's device can easily be implemented. However, the default transmission mode for simple parameters is call by value, contrary to ALGOL, which used call by name. The source code for Jensen's device must therefore specify call by name for the parameters when compiled by a Simula compiler.
Another much simpler example is the summation function which can be implemented as follows:
Real Procedure Sigma (k, m, n, u);
Name k, u;
Integer k, m, n; Real u;
Begin
Real s;
k:= m;
While k <= n Do Begin s:= s + u; k:= k + 1; End;
Sigma:= s;
End;
The above code uses call by name for the controlling variable (k) and the expression (u). This allows the controlling variable to be used in the expression. Note that the Simula standard allows for certain restrictions on the controlling variable in a for loop. The above code therefore uses a while loop for maximum portability.
The summation Z = Σ (for i = 1 to 100) of 1 / (i + a)^2 can then be implemented as follows:
Z:= Sigma (i, 1, 100, 1 / (i + a) ** 2);
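For comparison, the effect of call by name can be imitated in languages that lack it by passing parameterless closures ("thunks") explicitly. The Python sketch below mirrors the Sigma example above; it is only an illustration of the mechanism, and the helper names (sigma, set_k, env) and the value chosen for a are invented for the example rather than taken from any Simula implementation.

def sigma(set_k, m, n, u):
    # set_k writes the controlling variable; calling u() re-evaluates the
    # expression each time, which is what Simula's call by name does implicitly.
    s = 0.0
    k = m
    while k <= n:
        set_k(k)
        s = s + u()
        k = k + 1
    return s

# Z:= Sigma (i, 1, 100, 1 / (i + a) ** 2); with a shared cell standing in for i:
env = {"i": 0}
a = 3.0
Z = sigma(lambda v: env.update(i=v), 1, 100, lambda: 1.0 / (env["i"] + a) ** 2)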
Simulation
Simula includes a simulation package for doing discrete event simulations. This simulation package is based on Simula's object-oriented features and its coroutine concept.
Sam, Sally, and Andy are shopping for clothes. They must share one fitting room. Each one of them is browsing the store for about 12 minutes and then uses the fitting room exclusively for about three minutes, each following a normal distribution. A simulation of their fitting room experience is as follows:
Simulation Begin
Class FittingRoom; Begin
Ref (Head) door;
Boolean inUse;
Procedure request; Begin
If inUse Then Begin
Wait (door);
door.First.Out;
End;
inUse:= True;
End;
Procedure leave; Begin
inUse:= False;
Activate door.First;
End;
door:- New Head;
End;
Procedure report (message); Text message; Begin
OutFix (Time, 2, 0); OutText (": " & message); OutImage;
End;
Process Class Person (pname); Text pname; Begin
While True Do Begin
Hold (Normal (12, 4, u));
report (pname & " is requesting the fitting room");
fittingroom1.request;
report (pname & " has entered the fitting room");
Hold (Normal (3, 1, u));
fittingroom1.leave;
report (pname & " has left the fitting room");
End;
End;
Integer u;
Ref (FittingRoom) fittingRoom1;
fittingRoom1:- New FittingRoom;
Activate New Person ("Sam");
Activate New Person ("Sally");
Activate New Person ("Andy");
Hold (100);
End;
The main block is prefixed with Simulation for enabling simulation. The simulation package can be used on any block and simulations can even be nested when simulating someone doing simulations.
The fitting room object uses a queue (door) for getting access to the fitting room. When someone requests the fitting room and it's in use they must wait in this queue (Wait (door)). When someone leaves the fitting room the first one (if any) is released from the queue (Activate door.first) and accordingly removed from the door queue (door.First.Out).
Person is a subclass of Process and its activity is described using hold (time for browsing the store and time spent in the fitting room) and calls procedures in the fitting room object for requesting and leaving the fitting room.
The main program creates all the objects and activates all the person objects to put them into the event queue. The main program holds for 100 minutes of simulated time before the program terminates.
See also
BETA (programming language), a modern successor to Simula
Notes
Sources
Further reading
External links
(last working version at archive.org, accessed 2022-02-26)
ALGOL 60 dialect
Class-based programming languages
Norwegian inventions
Programming languages created in 1962
Science and technology in Norway
Simulation programming languages
Programming languages |
79823 | https://en.wikipedia.org/wiki/Real%20mode | Real mode | Real mode, also called real address mode, is an operating mode of all x86-compatible CPUs. The mode gets its name from the fact that addresses in real mode always correspond to real locations in memory. Real mode is characterized by a 20-bit segmented memory address space (giving exactly 1 MB of addressable memory) and unlimited direct software access to all addressable memory, I/O addresses and peripheral hardware. Real mode provides no support for memory protection, multitasking, or code privilege levels.
Before the release of the 80286, which introduced protected mode, real mode was the only available mode for x86 CPUs; for backward compatibility, all x86 CPUs start in real mode when reset, though real mode can also be emulated on systems that start in other modes.
History
The 286 architecture introduced protected mode, allowing for (among other things) hardware-level memory protection. Using these new features, however, required a new operating system that was specifically designed for protected mode. Since a primary design specification of x86 microprocessors is that they are fully backward compatible with software written for all x86 chips before them, the 286 chip was made to start in 'real mode' – that is, in a mode which turned off the new memory protection features, so that it could run operating systems written for the 8086 and the 8088. As of 2018, current x86 CPUs (including x86-64 CPUs) are able to boot real mode operating systems and can run software written for almost any previous x86 chip without emulation or virtualization.
The PC BIOS which IBM introduced operates in real mode, as do the DOS operating systems (MS-DOS, DR-DOS, etc.). Early versions of Microsoft Windows ran in real mode. Windows/386 made it possible to make some use of protected mode, and this was more fully realized in Windows 3.0, which could run in either real mode or make use of protected mode in the manner of Windows/386. Windows 3.0 actually had several modes: "real mode", "standard mode" and "386-enhanced mode", the latter required some of the virtualization features of the 80386 processor, and thus would not run on an 80286. Windows 3.1 removed support for real mode, and it was the first mainstream operating environment which required at least an 80286 processor. None of these versions could be considered a modern x86 operating system, since they switched to protected mode only for certain functions. Unix, Linux, OS/2, Windows NT 3.x, and later Windows NT, etc. are considered modern OS's as they switch the CPU into protected mode at startup, never return to real mode and provide all of the benefits of protected mode all of the time. 64-bit operating systems use real mode only at startup stage, and the OS kernel will switch the CPU into long mode. It is worth noting that the protected mode of the 80286 is considerably more primitive than the improved protected mode introduced with the 80386; the latter is sometimes called 386 protected mode, and is the mode modern 32-bit x86 operating systems run in.
Addressing capacity
The 8086, 8088, and 80186 have a 20-bit address bus, but the unusual segmented addressing scheme Intel chose for these processors actually produces effective addresses which can have 21 significant bits. This scheme shifts a 16-bit segment number left four bits (making a 20-bit number with four least-significant zeros) before adding to it a 16-bit address offset; the maximum sum occurs when both the segment and offset are 0xFFFF, yielding 0xFFFF0 + 0xFFFF = 0x10FFEF. On the 8086, 8088, and 80186, the result of an effective address that overflows 20 bits is that the address "wraps around" to the zero end of the address range, i.e. it is taken modulo 2^20 (2^20 = 1048576 = 0x100000). However, the 80286 has 24 address bits and computes effective addresses to 24 bits even in real mode. Therefore, for the segment 0xFFFF and offset greater than 0x000F, the 80286 would actually make an access into the beginning of the second megabyte of memory, whereas the 80186 and earlier would access an address equal to [offset]-0x10, which is at the beginning of the first megabyte. (Note that on the 80186 and earlier, the first kilobyte of the address space, starting at address 0, is the permanent, immovable location of the interrupt vector table.) So, the actual amount of memory addressable by the 80286 and later x86 CPUs in real mode is 1 MB + 64 KB – 16 B = 1,114,096 B.
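The arithmetic described above can be summarised in a few lines; the following Python sketch is an illustration only (the function and parameter names are chosen for the example), showing the 20-bit wrap-around of the 8086/8088/80186 and the 24-bit behaviour of the 80286.

def real_mode_address(segment, offset, address_bits=20):
    # segment:offset -> physical address; the 8086/8088/80186 keep only 20 bits,
    # so sums past 1 MB wrap around, while the 80286 computes 24 bits and does not.
    return ((segment << 4) + offset) % (1 << address_bits)

print(hex(real_mode_address(0xFFFF, 0xFFFF)))      # 0xffef   (wraps around on an 8086)
print(hex(real_mode_address(0xFFFF, 0xFFFF, 24)))  # 0x10ffef (80286 reaches past 1 MB)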
A20 line
Some programs predating the 80286 were designed to take advantage of the wrap-around (modulo) memory addressing behavior, so the 80286 presented a problem for backward compatibility. Forcing the 21st address line (the actual logic signal wire coming out of the chip) to a logic low, representing a zero, results in a modulo-2^20 effect to match the earlier processors' address arithmetic, but the 80286 has no internal capability to perform this function. When IBM used the 80286 in their IBM PC/AT, they solved this problem by including a software-settable gate to enable or disable (force to zero) the A20 address line, between the A20 pin on the 80286 and the system bus; this is known as Gate-A20 (the A20 gate), and it is still implemented in PC chipsets to this day. Most versions of the HIMEM.SYS extended memory driver for IBM-/MS-DOS famously displayed upon loading a message that they had installed an "A20 handler", a piece of software to control Gate-A20 and coordinate it to the needs of programs. In protected mode the A20 line needs to be enabled, or else physical addressing errors will occur, likely leading to a system crash. Modern legacy boot loaders (such as GNU GRUB) use the A20 line.
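In terms of the arithmetic sketched above, closing Gate-A20 simply holds bit 20 of every physical address at zero, which restores the 8086-style wrap-around; the snippet below is a hypothetical illustration, not actual chipset or HIMEM.SYS code.

def gate_a20(address, enabled):
    # With Gate-A20 closed, address line 20 is forced to zero, masking off bit 20.
    return address if enabled else address & ~(1 << 20)

print(hex(gate_a20(0x10FFEF, enabled=False)))  # 0xffef   - wraps like an 8086/8088
print(hex(gate_a20(0x10FFEF, enabled=True)))   # 0x10ffef - reaches into the second megabyte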
Switching to real mode
Intel introduced protected mode into the x86 family with the intention that operating systems which used it would run entirely in the new mode and that all programs running under a protected mode operating system would run in protected mode as well. Because of the substantial differences between real mode and even the rather limited 286 protected mode, programs written for real mode cannot run in protected mode without being rewritten. Therefore, with a wide base of existing real mode applications which users depended on, abandoning real mode posed problems for the industry, and programmers sought a way to switch between the modes at will. However, Intel, consistent with their intentions for the processor's usage, provided an easy way to switch into protected mode on the 80286 but no easy way to switch back to real mode. Before the 386 the only way to switch from protected mode back to real mode was to reset the processor; after a reset it always starts up in real mode to be compatible with earlier x86 CPUs back to the 8086. Resetting the processor does not clear the system's RAM, so this, while awkward and inefficient, is actually feasible. From protected mode, the processor's state is saved in memory, then the processor is reset, restarts in real mode, and executes some real mode code to restore the saved state from memory. It can then run other real mode code until the program is ready to switch back to protected mode. The switch to real mode is costly in terms of time, but this technique allows protected mode programs to use services such as BIOS, which runs entirely in real mode (having been designed originally for the 8088-based IBM Personal Computer model (machine type) 5150). This mode-switching technique is also the one used by DPMI (under real, not emulated, DOS) and DOS extenders like DOS/4GW to allow protected mode programs to run under DOS; the DPMI system or DOS extender switches to real mode to invoke DOS or BIOS calls, then switches back to return to the application program which runs in protected mode.
Decline
The change to the NT kernel meant that the operating system no longer needed DOS to boot the computer, nor could it make use of DOS. The need to restart the computer into real-mode MS-DOS declined after Windows 3.1x, until support was dropped in Windows ME. The only way to run DOS applications that require real mode from within newer versions of Windows is by using emulators such as DOSBox or x86 virtualization products.
See also
Unreal mode
Protected Mode
Boot loader
80386
IA-32
x86 assembly language
Conventional memory
References
External links
X86 operating modes
Programming language implementation |
17563020 | https://en.wikipedia.org/wiki/International%20Multilateral%20Partnership%20Against%20Cyber%20Threats | International Multilateral Partnership Against Cyber Threats | The International Multilateral Partnership Against Cyber Threats (IMPACT) is the first United Nations-backed cybersecurity alliance. Since 2011, IMPACT serves as a key partner of the United Nations' (UN) specialised agency for ICTs - the International Telecommunication Union (ITU).
Being the first comprehensive public-private partnership against cyber threats, IMPACT serves as a politically neutral global platform that brings together governments of the world, industry and academia to enhance the global community's capabilities in dealing with cyber threats. With a total of 152 countries now formally part of the ITU-IMPACT coalition, and with strong support from industry giants, partners from academia and international organizations, IMPACT is the largest cybersecurity alliance of its kind.
Headquartered in Cyberjaya, Malaysia, IMPACT is the operational home of ITU's Global Cybersecurity Agenda (GCA). IMPACT offers ITU's Member States with access to expertise, facilities and resources to effectively address cyber threats, as well as assisting United Nations agencies in protecting their ICT infrastructures.
The IMPACT initiative was first announced by the fifth Prime Minister of Malaysia during the closing ceremony of the 15th World Congress on Information Technology (WCIT) 2006, held in the Austin, Texas, United States.
Initially IMPACT was known as the 'International Multilateral Partnership Against Cyber-Terrorism'. In 2008, following feedback from member governments and also from IMPACT's International Advisory Board (IAB) during IMPACT's official launch at the World Cyber Security Summit 2008 (WCSS), the words 'Cyber Terrorism' in IMPACT's name was changed to 'Cyber Threats' to reflect its wider cybersecurity role.
Facilities at the Global Headquarters
IMPACT's Global Headquarters was inaugurated on 20 May 2009. It was built on a 28,400 square metre site (seven-acre site) with a built-up area of over 5,400 square metres (58,000 square feet). Modelled after the Centers for Disease Control and Prevention (CDC) in Atlanta, United States, IMPACT operates a Global Response Centre (GRC). As the nerve centre of IMPACT, the GRC is fully equipped with a crisis room, IT and communications facilities, a fully functional Security Operations Centre (SOC), well-equipped data centre, on-site broadcasting centre and a VIP viewing gallery. The GRC is involved in securing the objectives of ITU's Global Cybersecurity Agenda (GCA) by placing the technical measures to combat newly evolved cyber threats.
Inauguration of the IMPACT Global Headquarters by the Prime Minister of Malaysia and the Secretary-General of the International Telecommunication Union
The IMPACT Global Headquarters was officially declared open on 20 May 2009 by the 5th Prime Minister of Malaysia, Tun Abdullah bin Ahmad Badawi, witnessed by the Prime Minister of Malaysia, Dato' Sri Mohd Najib bin Tun Abdul Razak, the Secretary-General of the ITU, Hamadoun Touré, and IMPACT's Chairman, Datuk Mohd Noor Amin.
Through the GRC, IMPACT provides the global community with network early warnings system (NEWS), expert locator, team management, remediation, automated threat analysis system, trend libraries, visualisation of global threats, country-specific threats, incident and case management, trend monitoring and analysis, knowledge base, reporting, and resolution finder among others.
Collaboration with the ITU
IMPACT formally became a key partner of ITU - the United Nations’ (UN) specialised agency, following a Cooperation Agreement signed during the World Summit for Information Society 2011 (WSIS) Forum in Geneva, May 2011.
Under the Cooperation Agreement, IMPACT is tasked by ITU with the responsibility of providing cybersecurity assistance and support to ITU's 193 Member States and also to other organisations within the UN system. The Memorandum of Agreement was officially signed by Hamadoun Touré, the Secretary-General of ITU and Datuk Mohd Noor Amin, Chairman of IMPACT at the ITU's head office in Geneva. Founded in 1865, ITU is the oldest organisation within the UN system and functions as the UN's specialised agency for information and communication technologies.
IMPACT's involvement with ITU began in 2008 when it was named as the physical home of ITU's Global Cybersecurity Agenda (GCA). The GCA is an international cybersecurity framework that was formulated following deliberations by more than 100 leading experts worldwide. The GCA contains many recommendations, which when adopted and implemented, are intended to provide improved global cybersecurity. Through a Memorandum of Understanding inked in Bangkok back in 2008, IMPACT was named as the physical and operational home of the GCA.
In addition to this, during the 2011 WSIS Forum, a Memorandum of Understanding (MoU) was signed between ITU and the United Nations Office on Drugs and Crime (UNODC) which will see IMPACT supporting both organisations in their collaboration to assist UN member states to mitigate risks posed by cybercrime.
Partner countries of ITU-IMPACT are also given access to a host of specialised services including monitoring, analysis and alerts on cyber threats.
IMPACT's Global Response Centre (GRC) acts as a global cyber threat resource centre and provides emergency responses to facilitate identification of cyber threats and sharing of resources to assist ITU-UNODC Member States. IMPACT's Global Response Centre (GRC) collaborates with industry and academia, and hosts a comprehensive database on cyber threats. IMPACT's Electronically Secure Collaborative Application Platform for Experts (ESCAPE), is designed to connect those responsible for cybersecurity from over 140 countries. It also provides a response mechanism for ITU-IMPACT partner countries.
The other three divisions within IMPACT are Centre for Policy & International Cooperation, Centre for Training & Skills Development and Centre for Security Assurance & Research. These divisions provide consulting and training services, scholarships, reports and expertise to governments, industry and academia in partner countries.
International Advisory Board
International Advisory Board (IAB) members included:
Chairman: Blaise Compaoré, President of Burkina Faso. Prior to the appointment of the President of Burkina Faso, the Prime Minister of Malaysia held the position of Chairman of the IAB from its establishment in 2008 to 2011.
Hamadoun Touré, Secretary-General of the ITU
Steve Chang, Founder & Chairman of Trend Micro
Eugene Kaspersky, Founder and Chief Executive Officer of Kaspersky Lab
Fred Piper, Cryptologist, Founder of the Information Security Group at Royal Holloway, University of London
Gilbert G. Noël Ouedraogo, Minister of Transport & Information Technology, Burkina Faso
Samuel Lesuron Poghisio, MP, Minister of Information and Communications, Kenya
M. Ali Abbasov, Minister of Communications & Information Technologies, Azerbaijan
Vujica Lazović, Minister of Information Society and Telecommunications, Montenegro
Salim Sultan Al Ruzaiqi, Chief Executive Officer, Information Technology Authority (ITA), Oman
Tim Unwin, Chief Executive Officer, Commonwealth Telecommunications Organisation
Tim Archdeacon, President & Chief Executive Officer, ABI Research
Abdou Diouf, Secretary-General, International Organisation of La Francophonie (OIF)
Angela Sinaswee-Gervais, Permanent Secretary, Trinidad & Tobago
Past members include Vinton G. Cerf, Vice President and Chief Internet Evangelist, Google, Howard Schmidt, the former White House Cyber Security Coordinator of the Obama and Bush Administrations, United States of America, Mikko Hyppönen, Chief Research Officer of F-Secure, John W. Thompson, former Chairman of the Board, Symantec Corporation and Ayman Hariri, Chairman of Oger Systems.
Under the advisory board, the management team consists of
Mohd Noor Amin, Chairman. Management Board,
Mohamed Shihab, Advisor (Technical),
Mohamed Zaini Bin Mazlan, Advisor (Administration),
Anuj Singh, Chief Operating Officer (COO),
Phillip Victor, Director of Policy and International Cooperation,
and Mohamad Sazly Musa, Director of Security Assurance.
Support
The Malaysian Government provided a US$13 million grant to establish IMPACT's central headquarters, equipped with facilities for the international community.
F-Secure contributed its expertise in establishing IMPACT's Global Response Centre – designed as the first line of defence against cyber threats.
Kaspersky Lab provided technical expertise in setting up IMPACT's Network Early Warning System (NEWS) in the Global Response Centre.
SANS Institute and EC-Council each contributed a grant of US$1 million to IMPACT to create scholarship schemes for developing nations, helping to build capacity and capability in cybersecurity.
Symantec Corporation assisted IMPACT in establishing the "IMPACT Government Security Scorecard".
See also
International Telecommunication Union
United Nations Information and Communication Technologies Task Force
References
External links
Cybersecurity Gateway of the ITU
Official IMPACT's Facebook Page
Official IMPACT's Linkedin Page
Official IMPACT's Twitter Page
Speech by YAB Dato' Seri Ahmad Badawi, Prime Minister of Malaysia during WCIT 2008, Kuala Lumpur
Tackling Cyberthreats, Championing Cyberpeace
Training & Skills development
Centre for Security Assurance & Research
Centre for Policy & International Cooperation
Video clips
IMPACT's Official Launch Video
IMPACT's Official YouTube Page
CNBC's Interview with IMPACT Chairman on African Cybersecurity
Bloomberg's Interview with IMPACT Chairman in Singapore for their Asia Confidential Program
BBC World's Interview on IMPACT with the Head of the UN's International Telecommunication Union
Inauguration of IMPACT Global Headquarters
Computer security organizations
International Telecommunication Union
Internet governance organizations
Prime Minister's Department (Malaysia)
2008 establishments in Malaysia |
27385343 | https://en.wikipedia.org/wiki/Solar2D | Solar2D | Solar2D (formerly Corona SDK) is a free and open-source, cross-platform software development kit originally developed by Corona Labs Inc. and now maintained by Vlad Shcherban. Released in late 2009, it allows software programmers to build 2D mobile applications for iOS, Android, and Kindle, desktop applications for Windows, Linux and macOS, and connected TV applications for Apple TV, Fire TV and Android TV.
Solar2D uses integrated Lua layered on top of C++/OpenGL to build graphics applications. The software has two operational modes: the Solar2D Simulator and Solar2D Native. With the Solar2D Simulator, apps are built directly from the simulator itself. Solar2D Native allows developers to integrate Lua code and assets within an Xcode or Android Studio project to build apps that include native features.
History
Walter Luh and Carlos Icaza started Ansca Mobile, later renamed Corona Labs, after departing from Adobe in 2007. At Adobe, Luh was the lead architect working on the Flash Lite team and Icaza was the engineering manager responsible for mobile Flash authoring. In June 2009, Ansca released the first Corona SDK beta free for early adopters.
In December 2009, Ansca launched Corona SDK 1.0 for iPhone. The following February, the Corona SDK 1.1 was released with additional features.
In September 2010, Ansca released version 2.0 of Corona SDK and added Corona Game Edition. Version 2.0 added cross-platform support for iPad and Android, while Game Edition added a physics engine and other advanced features aimed specifically at game development.
In January 2011, Corona SDK was released for Windows XP and newer, giving developers the opportunity to build Android applications on PC.
In April 2012, co-founder and CEO Icaza left Ansca, and CTO Luh took the CEO role. Shortly after, in June 2012, Ansca changed its name to Corona Labs. In August 2012, Corona Labs announced Enterprise Edition, which added native bindings for Objective-C.
In March 2015, during GDC 2015, Corona Labs announced that Corona SDK would become completely free and would support Windows and Mac OS X deployment targets.
In November 2015, Corona Labs Inc. announced support for tvOS development for Apple TV.
In March 2017, Corona Labs was acquired by Appodeal and announced that the Enterprise version of Corona would also become free.
In June 2017, Corona Labs announced that Enterprise had been renamed Corona Native, was free for everyone, and was included as part of the core product.
In January 2019, Corona Labs announced that Corona 2D will be open sourced under the GNU GPLv3 license, while offering the option of a commercial license upon agreement with Corona Labs.
In April 2020, the engine was renamed from Corona SDK to Solar2D. This was done in response to the closure of Corona Labs, as well as the COVID-19 pandemic. Corona Labs also stopped offering commercial licenses and changed its open source license from GPLv3 to the more permissive MIT License.
Major features
Solar2D's API suite features calls for audio and graphics, cryptography, networking, and device data such as accelerometer readings, GPS, and user input, as well as widgets, particle effects, and more.
Bibliography
References
External links
Solar2d's official website
Corona Labs website
2009 software
Android (operating system) development software
Formerly proprietary software
Integrated development environments
IPhone video game engines
Lua (programming language)-scriptable game engines
MacOS programming tools
Mobile software
Mobile software programming tools
Video game development software
Software development kits
Software using the MIT license |
31859078 | https://en.wikipedia.org/wiki/Sony%20Computer%20Entertainment%2C%20Inc.%20v.%20Connectix%20Corp. | Sony Computer Entertainment, Inc. v. Connectix Corp. | Sony Computer Entertainment v. Connectix Corporation, 203 F.3d 596 (2000), is a decision by the Ninth Circuit Court of Appeals which ruled that the copying of a copyrighted BIOS software during the development of an emulator software does not constitute copyright infringement, but is covered by fair use. The court also ruled that Sony's PlayStation trademark had not been tarnished by Connectix Corp.'s sale of its emulator software, the Virtual Game Station.
Background of the case
In July 1998, Connectix started the development of the Virtual Game Station (VGS) as a Macintosh software application that emulates Sony's popular PlayStation video games console's hardware and firmware. This would make it possible for VGS users to play games developed for the PlayStation on Macintosh hardware, with plans to release a Windows PC compatible version at a later date. Connectix's development strategy was based upon reverse engineering the PlayStation's BIOS firmware, first by using the unchanged BIOS to develop emulation for the hardware, and then by developing a BIOS of their own using the original firmware as an aid for debugging. During the development work, Connectix contacted Sony, requesting "technical assistance" for completing the VGS, but this request was eventually declined in September 1998.
The Virtual Game Station development reached completion in December 1998, with the software being commercially released in the following month, January 1999. Sony perceived the VGS as a threat to its video game business, and filed a complaint alleging copyright infringement as well as violations of intellectual property against Connectix on January 27, 1999. Sony drew support from fellow video game hardware manufacturers Nintendo, Sega, and 3dfx Interactive, while Connectix was backed by fellow software firms and trade associations.
The district court awarded Sony an injunction blocking Connectix
from copying or using the Sony BIOS code in the development of the Virtual Game Station for Windows; and
from selling the Virtual Game Station for Macintosh or the Virtual Game Station for Windows.
The district court also impounded all of Connectix's copies of the Sony BIOS and all copies of works based upon or incorporating Sony BIOS. Connectix then successfully appealed the ruling, with the United States Courts of Appeals for the Ninth Circuit reversing the earlier decision.
The court's decision
The Ninth Circuit's 3-0 ruling centered on whether Connectix's copying of the PlayStation firmware during reverse engineering was protected by fair use. The court relied heavily on the similar 1992 case Sega Enterprises Ltd. v. Accolade, Inc., whose key finding relevant to Sony v. Connectix was that copying for the purpose of reverse engineering falls within fair use.
The court considered each of the four fair use factors individually: the nature of the copyrighted work, the amount and substantiality of the portion used, the purpose and character of the use, and the effect of the use on the potential market.
1. Nature of the copyrighted work
While the Ninth Circuit acknowledged that software code deserves copyright protection, the court, following the precedent of Sega v. Accolade, deemed that the PlayStation firmware fell under a lowered degree of copyright protection because it contained unprotected functional elements that could not be examined without copying. The court also rejected the district court's semantic distinction between "studying" and "use", finding it artificial. The opinion states, "[T]hey disassembled Sony's code not just to study the concepts. They actually used that code in the development of [their] product."
2. Amount and substantiality of the portion used
The court saw this factor as being of little significance to the case at hand. While Connectix did disassemble and copy the Sony BIOS repeatedly over the course of reverse engineering, the final Virtual Game Station product contained no infringing material. As a result, "this factor [held] ... very little weight" in determining the decision.
3. Purpose and character of the use
Sony had argued that Connectix infringed Sony's copyright by making numerous intermediate copies (that is, copies of copyrighted computer code created to aid the development of a non-infringing product) of the PlayStation BIOS during the reverse engineering process. The court rejected this notion, ruling that such a copy-grounded basis for what qualified as fair use would result in software engineers choosing inefficient engineering methods that minimized the number of intermediate copies. Preventing such "wasted effort", they argued, was the very purpose of fair use.
In addition, the court found that the ultimate purpose and character of Connectix's use of Sony's BIOS, in that it created a new platform for Sony PlayStation games, qualified as "modestly transformative." This factor of fair use therefore lay in Connectix's favor.
4. Effect of the use upon the potential market
The court held in favor of Connectix on this point as well. While the Virtual Game Station might well lower Sony's PlayStation console sales, its transformative status, allowing PlayStation games to be played on the Macintosh, rendered it a legitimate competitor in the market for Sony and Sony-licensed games: "For this reason, some economic loss by Sony as a result of this competition does not compel a finding of no fair use. Sony understandably seeks control over the market for devices that play games Sony produces or licenses. The copyright law, however, does not confer such a monopoly."
The Ninth Circuit Court also reversed the district court's ruling that the Virtual Game Station tarnished Sony's "PlayStation" trademark. Sony had to show that (1) the PlayStation "mark is famous"; (2) Connectix is "making commercial use of the mark"; (3) Connectix's "use began after the mark became famous"; and that (4) Connectix's "use of the mark dilutes the quality of the mark by diminishing the capacity of the mark to identify and distinguish goods and services." As the first three points were not under debate (Connectix conceded points (1) and (3)), the court addressed only the fourth point.
The court also found that the studies Sony provided lacked sufficient evidence that the PlayStation trademark had been diluted:
"The evidence here fails to show or suggest that Sony's mark or product was regarded or was likely to be regarded negatively because of its performance on Connectix's Virtual Game Station. The evidence is not even substantial on the quality of that performance. … Sony's tarnishment claim cannot support the injunction."
Conclusion and aftermath
The Ninth Circuit Court reversed the district court's decision on both the copyright infringement and the trademark tarnishment claims, lifting the injunction against Connectix. Connectix immediately filed a motion with the district court to summarily dismiss Sony's lawsuit. After a failed attempt by Sony to appeal the case to the Supreme Court, the two companies settled out of court about a year later. On March 15, 2001, Sony purchased the VGS rights from Connectix and discontinued the product on June 30 of that year. Connectix itself closed in August 2003.
Video game emulation advocates have asserted that Sony v. Connectix established the legality of emulators within the United States.
See also
Sega v. Accolade
Connectix
Video game console emulator
References
United States Court of Appeals for the Ninth Circuit cases
Sony litigation
United States copyright case law
2000 in United States case law |
29834660 | https://en.wikipedia.org/wiki/Nettle%20%28cryptographic%20library%29 | Nettle (cryptographic library) | Nettle is a cryptographic library designed to fit easily in a wide range of toolkits and applications. It began as a collection of low-level cryptography functions from lsh in 2001. Since June 2009 (version 2.0) Nettle is a GNU package.
Features
Since version 3, Nettle provides:
the AES block cipher (a subset of Rijndael), with assembly optimizations for x86 and SPARC
the ARCFOUR (also known as RC4) stream cipher, with x86 and SPARC assembly
the ARCTWO (also known as RC2) block cipher
the BLOWFISH, CAMELLIA (with x86 and x86_64 assembly optimizations), CAST-128, DES and 3DES block ciphers
the ChaCha stream cipher, with assembly for x86_64
the GOSTHASH94, MD2, MD4 and MD5 (with x86 assembly) digests
the PBKDF2 key derivation function
the POLY1305 (with assembly for x86_64) and UMAC message authentication codes
the RIPEMD160 digest
the Salsa20 stream cipher, with assembly for x86_64 and ARM
the SERPENT block cipher, with assembly for x86_64
the SHA-1 digest, with x86, x86_64 and ARM assembly
the SHA-2 digests (SHA-224, SHA-256, SHA-384 and SHA-512)
SHA-3 (a subset of the Keccak digest family)
the TWOFISH block cipher
the RSA, DSA and ECDSA public-key algorithms
the Yarrow pseudorandom number generator
Version 3.1 introduced support for Curve25519 and EdDSA operations. The public-key algorithms use GMP.
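As an illustration of how these primitives are exposed, the following minimal sketch (an illustrative assumption, not taken from the Nettle documentation) encrypts a single 16-byte block with AES-128 using the per-size AES interface of Nettle 3.x; the key and plaintext bytes are arbitrary placeholders, and the program would be built against an installed libnettle (for example, gcc aes_example.c -lnettle):

#include <stdio.h>
#include <nettle/aes.h>

int main(void)
{
    /* Arbitrary placeholder key and plaintext, one AES block (16 bytes) each. */
    const uint8_t key[AES128_KEY_SIZE] = {
        0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
        0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f };
    const uint8_t plaintext[AES_BLOCK_SIZE] = {
        0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
        0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f };
    uint8_t ciphertext[AES_BLOCK_SIZE];
    struct aes128_ctx ctx;

    aes128_set_encrypt_key(&ctx, key);               /* expand the 128-bit key */
    aes128_encrypt(&ctx, AES_BLOCK_SIZE,
                   ciphertext, plaintext);           /* encrypt one raw block */

    for (size_t i = 0; i < AES_BLOCK_SIZE; i++)      /* print the ciphertext in hex */
        printf("%02x", ciphertext[i]);
    printf("\n");
    return 0;
}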
Nettle is used by GnuTLS.
Licence and motivation
An API which fits one application well may not work well in a different context, resulting in a proliferation of cryptographic libraries designed for particular applications. Nettle is an attempt to avoid this problem by doing one thing, the low-level cryptography, and providing a simple and general interface to it. In particular, Nettle does not do algorithm selection, memory allocation or any I/O. Nettle is thus intended to provide a core cryptography library upon which numerous application- and context-specific interfaces can be built. The code, test cases, benchmarks, documentation, etc. of these interfaces can then be shared without having to replicate Nettle's cryptographic code.
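The flavor of this design can be seen in a short, hedged sketch of computing a SHA-256 digest (again an illustrative assumption rather than material from the Nettle manual): the caller owns the hash context and the output buffer, Nettle performs no allocation or I/O, and the calls follow the library's usual init/update/digest pattern, here with two incremental updates. As with the AES sketch above, the program would be linked against libnettle (for example, gcc sha256_example.c -lnettle):

#include <stdio.h>
#include <string.h>
#include <nettle/sha2.h>

int main(void)
{
    const char *part1 = "hello, ";
    const char *part2 = "world";
    uint8_t digest[SHA256_DIGEST_SIZE];   /* output buffer supplied by the caller */
    struct sha256_ctx ctx;                /* hash state lives on the caller's stack */

    sha256_init(&ctx);                                          /* initialise the context */
    sha256_update(&ctx, strlen(part1), (const uint8_t *)part1); /* feed data as (length, data) */
    sha256_update(&ctx, strlen(part2), (const uint8_t *)part2); /* incremental update */
    sha256_digest(&ctx, SHA256_DIGEST_SIZE, digest);            /* write the 32-byte digest */

    for (size_t i = 0; i < SHA256_DIGEST_SIZE; i++)             /* print the digest in hex */
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}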
Nettle is primarily licensed under a dual licence scheme comprising the GNU General Public License version 2 or later and the GNU Lesser General Public License version 3 or later. A few individual files are licensed under more permissive licences or are in the public domain. The copyright notices at the top of the library's source files precisely define the licence status of particular files.
The Nettle manual "is in the public domain" and may be used and reproduced freely.
See also
Botan
Bouncy Castle
Cryptlib
Libgcrypt
Crypto++
Comparison of cryptography libraries
References
Cryptographic software
GNU Project software
Free security software
Free computer libraries
Assembly language software
Software using the LGPL license
Software using the GPL license
2020 software |