1992 Troy State vs. DeVry men's basketball game
The 1992 Troy State vs. DeVry men's basketball game is the highest-scoring men's basketball game in National Collegiate Athletic Association (NCAA) history, regardless of division classification. On January 12, 1992, Troy State University, now known as Troy University, defeated DeVry University of Atlanta 258–141 in a game that is considered to have established several unbreakable records.
Background
During the 1991–92 college basketball season, Troy State was playing its next to last year as an NCAA Division II school before transitioning to Division I. They were led by head coach Don Maestri, whose unconventional offense-oriented system led to incredibly high-scoring games; that season, Troy State led all of Division II with a 121.0 points per game scoring average (while also giving up 107.8 per game). They attempted an NCAA-record 1,303 three-pointers in 1991–92 and scored on 444 of them. Maestri's philosophy was to unapologetically attempt steals on an opponent's every possession, and if they missed the steal, they allowed the opponent to score as long as they scored quickly. He substituted players regularly and knew that his track-meet style of pressure would eventually wear out the other team. Once tired, his Troy State squads would continue to relentlessly pursue opponents on the defensive end; on offense, no shot was considered a bad shot, and the quicker the attempt, the better. The high-octane offense used by Troy State was modeled after Paul Westhead's Loyola Marymount teams of the era; Maestri even had Westhead mail him Loyola Marymount game tapes to study the plays and methods used by the successful Division I school.
Heading into the match-up against DeVry University of Atlanta, the Trojans sported a 12–3 record while the DeVry Hoyas had a 3–15 record. DeVry was classified as a National Association of Intercollegiate Athletics (NAIA) Division II school; it struggled to get wins against comparably talented opponents, let alone high-octane, successful NCAA Division II schools. The previous season, Troy State had set the NCAA record for points in a game with 187—also against DeVry of Atlanta. Stacking the odds further against DeVry was that they had only seven players, thus any chance of resting and catching their breath during substitutions was minimal at best.
The game
After tip-off, Troy State scored their first basket after 54 seconds. Player Paul Bryan later said, jokingly, "It was a little touch and go there early." Despite their frenetic pace, the Trojans "only" had 15 points after the first three minutes. As the game settled into its soon-to-be-record-breaking pace, points came steadily; with 3:14 remaining in the first half, Troy eclipsed the 100-point mark. Guard Tommy Davis said, "When you see one guy hitting, then everybody gets in the act. It becomes contagious." At the end of the first half, the score was 123–53. The Trojans made 21 three-pointers in the first 20 minutes, and their 123 points had already broken their own NCAA single-half record from the year before (103), also set against DeVry.
Within the first three minutes of the second half, the Trojans scored 26 points and had already accumulated 149 overall with 17 minutes remaining. It was not until 6:35 into the second half that Troy State scored their first points of the half that were not three-pointers or dunks. With 10 minutes remaining, Chris Greasham's three-pointer gave Troy 189 points, eclipsing the previous NCAA single-game scoring record of 187. Then, with 7:53 to go, they surpassed the 200-point mark, becoming the first and only team in college basketball history to do so. The scoreboard was not built to display 200-plus points, so when the moment occurred, it did not display the numbers correctly (the scoreboard operator's solution was to start over at zero). During the second half alone, the Trojans scored 135 points, besting their minutes-old record of 123, and their 30 three-pointers in the second stanza exceeded the NCAA all-time full-game record of 25 (set previously by Troy). Their 51 made three-point field goals more than doubled that record, and their 109 three-point attempts record has never been seriously contested. Tommy Davis remarked that the game "reminded [him] of a street game you play in the summer." Jack Smith credited their home crowd with giving players the extra energy they needed to maintain the record-shattering pace: "It seems almost impossible to hit 200 points in a game. It's a great, great feeling. The crowd played a big part in us getting the record. Their hollering gave us the energy we needed."
For the game, 10 of the 11 Troy State players scored in double figures. Terry McCord, who the following season would be named an NCAA Division II All-American, led the team with 41 points on 16-for-26 shooting. The only player not to score in double figures was Andy Davis, who made the game's first basket and finished 1-for-1. Eight of the 11 Trojans scored at least 20 points, and of those, five scored at least 29. Smith recorded the game's only triple-double, with 29 points, 13 rebounds, and 11 assists. Of the many statistical anomalies to occur in this high-scoring game, one was that only three total free throws were attempted between the teams (Troy attempted, and made, all three). DeVry's Clayton Jones had 19 of the Hoyas' 44 turnovers by himself, while DeVry's Dartez Daniel scored a game-high 42 points on 20-for-30 shooting.
Box score
Aftermath
The January 12, 1992, game between Troy State and DeVry remains the highest-scoring single game in NCAA history. Seven statisticians worked for 57 minutes after the game ended to complete its box score. Among records considered unbreakable are total combined points (399), points by one team in one half (135), and three-pointers made and attempted (51/109) by one team in a single game. That season, Troy State compiled a 23–6 overall record while setting many school records along the way, including single-season scoring average (121.0), field goals made and attempted (1,274/2,839), three-pointers made and attempted (444/1,303), and steals (460). They also set single-game records for points (258), points in a half (135), field goals made and attempted (102/190), rebounds (94), assists (65), and total combined points for two teams in a single game. Troy State lost in the first round of the NCAA Division II Tournament. The Trojans have since transitioned to NCAA Division I, and the school changed its name to the current Troy University in 2005; while they have won six conference championships through the 2019–20 season, they have reached the NCAA Division I Tournament just twice. DeVry University, meanwhile, dropped its entire athletics program from its Atlanta campus in the 1990s. Its teams were rarely competitive, and the cost to maintain sports outweighed the returns. The basketball team finished the 1991–92 season with a 3–16 record.
2017 video analysis
On March 13, 2017, SB Nation's Jon Bois published a video in which he argued that the correct final score of the game should have been Troy State 253, DeVry 141. Relying on a single continuous recording of the game posted to YouTube, Bois counted all made baskets and arrived at 253 points for Troy State. He identified two potential scorer's errors: a Troy State dunk that went in after the horn had errantly blown, which resulted in the ball being returned to Troy State, and an attempted three-point basket that ended with the ball lodged between the backboard and rim.
See also
Grinnell System – a fast-tempo style of basketball developed by coach Dave Arseneault at Grinnell College in Grinnell, Iowa, similar to the tactics used by Don Maestri at Troy
Nellie Ball – created by NBA head coach Don Nelson, it is a fast-paced run-and-gun offense relying on smaller, more athletic players who can create mismatches by outrunning their opponents
Footnotes
The official box score lists Greasham's total points as 29. However, the number of two- and three-point field goals he tallied gives him 20 total points. Twenty points also correctly sums the team's total points to 258.
References
External links
YouTube video of entire game
1991–92 NCAA Division II men's basketball season
College basketball games in the United States
Troy Trojans men's basketball
DeVry University
January 1992 sports events in the United States
1992 in sports in Alabama
Pixel artist
A pixel artist is a graphic designer who specializes in computer art; the term can also refer to a number of artistic and professional disciplines that focus on visual communication and presentation. Similar to chromoluminarism, used in the pointillism style of painting, in which small distinct points of primary colors create the impression of a wide selection of secondary and intermediate colors, a pixel artist works with pixels, the smallest pieces of information in an image. The technique relies on the perceptive ability of the eye and mind of the viewer to mix the color spots into a fuller range of tones. Pixel art is often utilitarian and anonymous. Pixel design can refer to both the process (designing) by which the communication is created and the products (designs) which are generated.
Common uses of pixel design include print and broadcast media, web design and games. For example, a product package might include a logo or other artwork, organized text and pure design elements such as shapes and color which unify the piece. Composition is one of the most important features of design, especially when utilizing pre-existing materials or combining diverse elements. Pixel artists can also be specialists in computer animation, such as Computer Animation Production System users in post-production of animated films, and in rendering images such as raster graphics.
In the 2000s, pixel artists such as Tyler West, Stephane Martiniere and Daniel Dociu have gained international artistic recognition, due in part to the popularity of computer and video games. For instance, the E3 Media and Business Summit, an annual trade show for the computer and video games industry, has held a concurrent juried art show, "Into the Pixel", since 2003. Juror and Getty Research Institute curator Louis Marchesano noted that most of the works were concept pieces used in the development of games.
Pixel artists are also used in digital forensics, an emerging field, to both create and detect fraud in all forms of media, including "the courts, politics and scientific journals". For instance, the Federal Office of Research Integrity has said that the percentage of fraud allegations it investigated that involved contested images rose from less than 3 percent in 1990 to 44.1 percent in 2006.
History
Computer art started in the 1960s and by its nature is evolutionary, since changes in technology and software directly affect what is possible. The term pixel art was first published by Adele Goldberg and Robert Flegal of Xerox Palo Alto Research Center in 1982. Adobe Systems, founded in 1982, developed the PostScript language and digital fonts, making drawing, painting, and image-manipulation software popular. Adobe Illustrator, a vector drawing program based on the Bézier curve, was introduced in 1987, and Adobe Photoshop followed in 1990. Adobe Flash, a popular set of multimedia software used to add animation and interactivity to web pages, was introduced in 1996.
In the 2000s, pixel artists have been employed increasingly in video games and, to a lesser extent, in music videos. These artists remained somewhat underground until the mid-2000s. In 2006, Röyksopp released "Remind Me", illustrated completely with pixel art, which The New York Times described as amazing and hypnotic.
Background
A pixel artist is one of the new media artists who employ technology while also drawing on traditional media and art forms. They may have a fine arts background such as photography, painting or drawing, but self-taught designers and artists are also able to accomplish this work. They are often required to employ digital imaging and a full range of artistic and technological skills, including those of conceptual artists.
In digital imaging, a pixel (picture element) is the smallest piece of information in an image.
The word pixel is based on a contraction of pix (for "pictures") and el (for "element"); similar formations with el for "element" include voxel, luxel, and texel. Pixels are normally arranged in a regular 2-dimensional grid, and are often represented using dots, squares, or rectangles. Each pixel is a sample of an original image, where more samples typically provide a more accurate representation of the original. The intensity of each pixel is variable; in color systems, each pixel has typically three or four components such as red, green, and blue, or cyan, magenta, yellow, and black.
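As a concrete sketch of this component model, a pixel in an RGB color system can be represented as a tuple of intensities and an image as a grid of such tuples. The luminance weights below are the Rec. 709 coefficients, used here purely as an illustrative choice; the article itself does not prescribe any particular formula:

```python
# Model a pixel as an 8-bit RGB tuple and an image as a row-major
# grid of pixels, then derive a single gray "intensity" per pixel.

def luma(r, g, b):
    """Approximate perceived brightness (0-255) using Rec. 709 weights."""
    return round(0.2126 * r + 0.7152 * g + 0.0722 * b)

# A tiny 2x2 image: red, green on the top row; blue, white below.
image = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 255)],
]

# Convert the color image to a grayscale grid of the same shape.
grays = [[luma(*pixel) for pixel in row] for row in image]
```

The same grid-of-samples idea generalizes to CMYK by storing four components per pixel instead of three.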
Neuroplasticity is a key element of observing many pixel images. While two individuals will observe the same photons reflecting off a photorealistic image and hitting their retinas, someone whose mind has been primed with the theory of pointillism may see a very different image as the image is interpreted in the visual cortex.
Techniques
The total number of pixels (image resolution), and the amount of information in each pixel (often called color depth) determine the quality of an image. For example, an image that stores 24 bits of color-information per pixel (the standard for computer displays since around 1995) can represent smoother degrees of shading than one that only stores 16 bits per pixel, but not as smooth as one that stores 48 bits. Likewise, an image sampled at 640 x 480 pixels (and therefore containing 307,200 pixels) will look rough and blocky compared to one sampled at 1280 x 1024 (1,310,720 pixels). Because it takes a large amount of data to store a high-quality image, computer software often uses data compression techniques to reduce this size for images stored on disk. Some techniques sacrifice information, and therefore image quality, in order to achieve a smaller file-size. Computer scientists refer to compression techniques that lose information as lossy compression.
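The arithmetic behind these figures is straightforward: raw (uncompressed) image size is simply pixel count times bits per pixel. A quick sketch reproducing the numbers above:

```python
# Raw image size before any compression: pixels x bits-per-pixel.

def pixel_count(width, height):
    return width * height

def raw_size_bytes(width, height, bits_per_pixel):
    # Total bits across all pixels, converted to bytes.
    return width * height * bits_per_pixel // 8

vga = pixel_count(640, 480)       # 307,200 pixels, as in the text
sxga = pixel_count(1280, 1024)    # 1,310,720 pixels

size_24 = raw_size_bytes(640, 480, 24)  # 24-bit color
size_48 = raw_size_bytes(640, 480, 48)  # doubling bit depth doubles size
```

This is exactly the size that lossy compression schemes trade quality against when shrinking files on disk.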
Modern computer monitors typically display about 72 to 130 pixels per inch (PPI), and some modern consumer printers can resolve 2400 dots per inch (DPI) or more; determining the most appropriate image resolution for a given printer resolution can pose difficulties, since printed output may have a greater level of detail than a viewer can discern on a monitor. Typically, a resolution of 150 to 300 pixels per inch works well for 4-color process (CMYK) printing.
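The pixel dimensions an image needs for print follow directly from physical size times printing density. A small sketch; the 4 × 6-inch print is a hypothetical example, not one from the text:

```python
# Required pixel dimensions = physical print size (inches) x density (PPI).

def pixels_needed(width_in, height_in, ppi):
    """Pixel dimensions required to print at the given density."""
    return (round(width_in * ppi), round(height_in * ppi))

# A 4x6-inch print at the 300 PPI upper bound mentioned above:
w, h = pixels_needed(4, 6, 300)   # 1200 x 1800 pixels
```

At the 150 PPI lower bound the same print needs only a quarter as many pixels, which is why target output size drives the resolution a pixel artist works at.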
Drawings usually start with what is called the line art: the basic lines that define the item the artist intends to create. Line art can either be traced over scanned drawings or drawn by hand on the computer itself using a mouse or a graphics tablet, and is often shared with other pixel artists on various websites in order to receive feedback. Other techniques, some resembling painting, also exist, such as those drawing on color theory. The limited palette often used in pixel art encourages the use of dithering to achieve different shades and colors (when necessary); hand-made anti-aliasing is also used for smoother edges. A pixel artist will greatly increase the zoom level of whatever they are working on to make adjustments as needed, and then view the results until the desired changes are achieved.
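The dithering idea can be illustrated with a deliberately simplified one-dimensional error-diffusion pass, a stripped-down cousin of the Floyd–Steinberg algorithm (in practice pixel artists often place dither patterns entirely by hand):

```python
# 1-D error diffusion: quantize each gray value (0-255) to black (0)
# or white (255) and carry the rounding error to the next pixel, so
# the average tone of the row is roughly preserved.

def diffuse_row(row, threshold=128):
    out, err = [], 0
    for value in row:
        value += err
        q = 255 if value >= threshold else 0
        out.append(q)
        err = value - q   # quantization error passed forward
    return out

# A flat mid-dark gray becomes an alternating black/white pattern
# whose average tone approximates the original value.
pattern = diffuse_row([100] * 8)
```

With only two output levels, the spacing of the white pixels is what encodes the original shade, which is exactly the effect a limited palette forces pixel artists to exploit.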
See also
Pixel art
Demoscene
Digital illustration
Digital painting
Fixed pixel display
Onion skinning
Pixel art scaling algorithms
Quantization error
Spriting
Invader (artist)
References
Further reading
Dabbs, Alistair - Interface Design; Watson-Guptill (2002).
NFGMan - Character Design for Mobile Devices; Focal Press (2006).
Raimes, Jonathan; Malcolm Garrett - The Digital Canvas; Abrams (2006).
Animation techniques
Artistic techniques
Computer graphic techniques
Design
Design occupations
Digital art
Visual arts occupations
Art occupations
macOS High Sierra
macOS High Sierra (version 10.13) is the fourteenth major release of macOS, Apple Inc.'s desktop operating system for Macintosh computers. macOS High Sierra was announced at the WWDC 2017 on June 5, 2017 and was released on September 25, 2017. The name "High Sierra" refers to the High Sierra region in California. Following on from macOS Sierra, its iterative name also alludes to its status as a refinement of its predecessor, focused on performance improvements and technical updates rather than user features. This makes it similar to previous macOS releases Snow Leopard, Mountain Lion and El Capitan. Among the apps with notable changes are Photos and Safari.
System requirements
macOS High Sierra is supported on the following Macintosh computers:
iMac: Late 2009 or later
MacBook: Late 2009 or later
MacBook Pro: Mid 2010 or later
MacBook Air: Late 2010 or later
Mac Mini: Mid 2010 or later
Mac Pro: Mid 2010 or later
macOS High Sierra requires at least 2 GB of RAM and 14.3 GB of available disk space.
It is possible to install High Sierra on many older Macintosh computers that are not officially supported by Apple. This requires using a patch to modify the install image.
Changes
System
Apple File System
Apple File System (APFS) replaces HFS Plus as the default file system in macOS for the first time with High Sierra. It supports 64‑bit inode numbers, is designed for flash memory, and is designed to speed up common tasks like duplicating a file and finding the size of a folder's contents. It also has built‑in encryption, crash‑safe protections, and simplified data backup on the go.
Metal 2
Metal, Apple's low-level graphics API, has been updated to Metal 2. It includes virtual-reality and machine-learning features, as well as support for external GPUs. The system's windowing system, Quartz Compositor, supports Metal 2.
Media
macOS High Sierra adds support for High Efficiency Video Coding (HEVC), with hardware acceleration where available, as well as support for High Efficiency Image File Format (HEIF). Macs with the Intel Kaby Lake processor offer hardware support for Main 10 profile 10-bit hardware decoding, those with the Intel Skylake processor support Main profile 8-bit hardware decoding, and those with AMD Radeon 400 series graphics also support full HEVC decoding. However, whenever an Intel IGP is present, the frameworks will only direct requests to the Intel IGP. In addition, the audio codecs FLAC and Opus are also supported, but not in iTunes.
HEVC hardware acceleration requires a Mac with a sixth-generation Intel processor or newer (late 2015 27-inch iMac, mid 2017 21.5-inch iMac, early 2016 MacBook, late 2016 MacBook Pro or iMac Pro).
Other
Kernel extensions ("kexts") will require explicit approval by the user before being able to run.
The Low Battery notification and its icon were replaced by a flatter modern look.
The time service ntpd was replaced with timed for time synchronization.
The FTP and telnet command line programs were removed.
Caching Server, File Sharing Server, and Time Machine Server, features that were previously part of macOS Server, are now provided as part of the OS.
The screen can now be locked using the shortcut Cmd+Ctrl+Q. The ability to lock the screen using a menu bar shortcut activated in Keychain Access preferences has been removed.
The 10.13.4 update added support for external graphics processors for Macs equipped with Thunderbolt 3 ports. The update discontinued support for external graphics processors in 2015 or older Macs, equipped with Thunderbolt 1 and 2 ports.
Starting with 10.13.4, when a 32-bit app is opened, users get a one-time warning about its future incompatibility with the macOS operating system.
Applications
Final Cut Pro 7
Apple announced that the original Final Cut Studio suite of programs would not work on High Sierra. Media professionals who depend on any of those programs were advised to set up a dual-boot drive on their computer.
Photos
macOS High Sierra gives Photos an updated sidebar and new editing tools.
Photos synchronizes tagged People with iOS 11.
Mail
Mail has improved Spotlight search with Top Hits. Mail also uses 35% less storage space due to optimizations, and Mail's compose window can now be used in split-screen mode.
Safari
macOS High Sierra includes Safari 11. Safari 11 has a new "Intelligent Tracking Prevention" feature that uses machine learning to block third parties from tracking the user's actions. Safari can also block autoplaying videos. The "Reader Mode" can be set to always-on. Safari 11 also supports WebAssembly. The last version of Safari that High Sierra supports is 13.1.2, which has known security issues.
Notes
The Notes app includes the ability to add tables to notes, and notes can be pinned to the top of the list. The version number was incremented to 4.5.
Siri
Siri now uses a more natural and expressive voice. It also uses machine learning to understand the user better. Siri synchronizes information across iOS and Mac devices so the Siri experience is the same regardless of the product being used.
Messages
The release of macOS High Sierra 10.13.5 (and iOS 11.4) introduced support for Messages in iCloud. This feature allows messages to sync across all devices using the same iCloud account. When messages are deleted they are deleted on each device as well, and messages stored in the cloud do not take up local storage on the device anymore. In order to use the feature, the user has to enable two-factor authentication for their Apple ID.
Other applications found on macOS 10.13 High Sierra
AirPort Utility
App Store
Archive Utility
Audio MIDI Setup
Automator
Bluetooth File Exchange
Boot Camp Assistant
Calculator
Calendar
Chess
ColorSync Utility
Console
Contacts
Dictionary
Digital Color Meter
Disk Utility
DVD Player
FaceTime
Font Book
Game Center
GarageBand (may not be pre-installed)
Grab
Grapher
iBooks (now Apple Books)
iMovie (may not be pre-installed)
iTunes
Image Capture
Ink (can only be accessed by connecting a graphics tablet to your Mac)
Keychain Access
Keynote (may not be pre-installed)
Migration Assistant
Numbers (may not be pre-installed)
Pages (may not be pre-installed)
Photo Booth
Preview
QuickTime Player
Reminders
Script Editor
Stickies
System Information
Terminal
TextEdit
Time Machine
VoiceOver Utility
X11/XQuartz (may not be pre-installed)
Reception
In his September 2017 review of High Sierra, Roman Loyola, the senior editor of Macworld, gave it a provisionally positive review, calling it an "incremental update worthy of your time, eventually." Loyola expressed that the product's most significant draw was its security features, and that beyond this, the most beneficial changes lay in its future potential, saying it "doesn't have a lot of new features that will widen your eyes in excitement. But a lot of the changes are in the background and under the hood, and they lay a foundation for better things to come."
Problems
macOS High Sierra 10.13.0 and 10.13.1 have a critical vulnerability that allowed an attacker to become a root user by entering "root" as a username, and not entering a password, when logging in. This was fixed in the Security Update 2017-001 for macOS High Sierra v10.13.1.
When it was first launched, it was discovered that the WindowServer process had a memory leak, leading to much slower graphics performance and lagging animations, probably due to some last-minute changes in Metal 2. This was fixed in macOS 10.13.1.
macOS High Sierra 10.13.4 had an error that caused DisplayLink to stop working for external monitors, allowing only one monitor to be extended. When using two external monitors, they could only be mirrored. Alban Rampon, the Senior Product Manager for DisplayLink, stated on December 24, 2018 that the company was working with Apple to resolve the issue.
Release history
References
External links
– official site
macOS High Sierra download page at Apple
X86-64 operating systems
2017 software
Computer-related introductions in 2017
Aarogya Setu
Aarogya Setu (translation from Sanskrit: the bridge to health) is an Indian COVID–19 "contact tracing, syndromic mapping and self-assessment" digital service, primarily a mobile app, developed by the National Informatics Centre under the Ministry of Electronics and Information Technology (MeitY).
The app reached more than 100 million installs in 40 days. On 26 May, amid growing privacy and security concerns, the source code of the app was made public.
Full view
The stated purpose of this app is to spread awareness of COVID–19 and to connect essential COVID–19-related health services to the people of India. This app augments the initiatives of the Department of Health to contain COVID–19 and shares best practices and advisories. It is a tracking app which uses the smartphone's GPS and Bluetooth features to track COVID-19 cases. The app is available for Android and iOS mobile operating systems. With Bluetooth, it tries to determine the risk if one has been near (within six feet of) a COVID–19-infected person, by scanning through a database of known cases across India. Using location information, it determines whether the location one is in belongs to one of the infected areas based on the data available.
This app is an updated version of an earlier app called Corona Kavach (now discontinued) which was released earlier by the Government of India.
Features and tools
Aarogya Setu has four sections:
User Status (tells the risk of getting COVID-19 for the user)
Self Assess (helps the users identify COVID-19 symptoms and their risk profile)
COVID-19 Updates (gives updates on local and national COVID-19 cases)
E-pass integration (if applied for E-pass, it will be available)
See Recent Contacts option (allows the users to assess the risk level of their Bluetooth contacts)
It tells how many COVID-19 positive cases are likely in a radius of 500 m, 1 km, 2 km, 5 km and 10 km from the user.
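A hypothetical sketch of how such a radius report could be computed from case coordinates using great-circle distances. The function names, coordinates and case list here are invented for illustration and do not reflect the app's actual implementation, which has not been published in this form:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in km."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def cases_within(user, cases, radii_km=(0.5, 1, 2, 5, 10)):
    """Count case locations falling within each distance band of the user."""
    dists = [haversine_km(*user, *case) for case in cases]
    return {radius: sum(d <= radius for d in dists) for radius in radii_km}

user = (28.6139, 77.2090)                          # a point in Delhi (example)
cases = [(28.6139, 77.2090), (28.6239, 77.2090)]   # 0 km and roughly 1.1 km away
report = cases_within(user, cases)
```

Each key of `report` corresponds to one of the radius bands listed above, with the count of known case locations inside it.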
The app is built on a platform that can provide an application programming interface (API) so that other computer programs, mobile applications, and web services can make use of the features and data available in Aarogya Setu.
Response
Aarogya Setu crossed five million downloads within three days of its launch, making it one of the most popular government apps in India. It became the world's fastest-growing mobile app, beating Pokémon Go, with more than 50 million installs 13 days after launching in India on 2 April 2020. It reached 100 million installs by 13 May 2020, that is, within 40 days of its launch.
In an order on 29 April 2020, the central government made it mandatory for all employees to download and use the app – "Before starting for office, they must review their status on Aarogya Setu and commute only when the app shows safe or low risk". The Union Home Ministry also said that the application is mandatory for everyone living in a COVID-19 containment zone. The government made the announcement along with the two-week extension of the nationwide lockdown from 4 May, with certain relaxations.
On 21 May 2020, the Airport Authority of India issued a Standard Operating Procedure (SOP) stating that all departing passengers must compulsorily be registered with the Aarogya Setu app. It added that the app would not be mandatory for children below 14 years. However, the next day, Civil Aviation Minister Hardeep Singh Puri clarified that the app would not be mandatory for any passengers.
In March 2021, Co-WIN portal was integrated with the app. This allowed users to schedule an appointment through the app for COVID-19 vaccine by registering their phone number and providing relevant documents.
Effectiveness
NITI Aayog CEO revealed that "the app has been able to identify more than 3,000 hotspots in 3–17 days ahead of time."
Reception
Rahul Gandhi, leader of the Congress party, termed the Aarogya Setu application a "sophisticated surveillance system" after the government announced that downloading the app would be mandatory for both government and private employees. Following this, others raised the same concerns about the Aarogya Setu app. The Ministry of Electronics and Information Technology (MeitY) responded to these concerns by asserting that Gandhi's claims were false, and that the app was being appreciated internationally.
On 5 May, French ethical hacker Robert Baptiste, who goes by the name Elliot Alderson on Twitter, claimed that there were security issues with the app. The Indian government, as well as the app developers, responded to this claim by thanking the hacker for his attention, but dismissed his concerns. The developers of the app stated that the fetching of location data is a documented feature of the app, rather than a flaw, since the app is designed to track the distribution of the virus-infected population. They also asserted that no personal information of any user has been proven to be at risk.
On 6 May, Robert Baptiste tweeted that security vulnerabilities in Aarogya Setu allowed hackers to "know who is infected, unwell, [or] made a self assessment in the area of his choice". He also gave details of how many people were unwell and infected at the Prime Minister's Office, the Indian Parliament and the Home Office. The Economic Times pointed out that a clause in the app's Terms and Conditions stated that the user "agrees and acknowledges that the Government of India will not be liable for … any unauthorised access to your information or modification thereof". In response, several software developers called for the source code to be made public.
On 12 May, former Supreme Court Judge Justice B.N. Srikrishna termed the government's push mandating the use of Aarogya Setu app "utterly illegal". He said so far it is not backed by any law and questioned "under what law, government is mandating it on anyone".
MIT Technology Review gave 2 out of 5 stars to the Aarogya Setu app after analyzing the COVID contact tracing apps launched in 25 countries. The app earned stars only for its policy, which suggests that collected data is deleted after a period of time and that data collection, as far as user inputs go, is minimal. The review also highlighted that India is the only democracy making its app mandatory for millions of people. The rating was later downgraded from 2 to 1 for collecting more information than the app needs to function.
Following this, the MeitY made the source code of the Android app public on GitHub on 26 May, to be followed by the iOS version and API documentation. The Government also launched a "bug bounty program". This was done to "promote transparency and ensure security and integrity of the app". However, experts stated that the server-side code had not yet been publicly released, which meant that public opinion on security and privacy was yet to be completely assuaged. Following this, ZDNet noted that the source code seemed to confirm the government's claim that user location data, if collected, would be anonymised and would be deleted after 45 days, or 60 days for high-risk individuals.
See also
COVID-19 apps
Stranded in India
References
External links
Mobile applications
COVID-19 pandemic in India
2020 software
COVID-19 contact tracing apps
E-government in India
VP9
VP9 is an open and royalty-free video coding format developed by Google.
VP9 is the successor to VP8 and competes mainly with MPEG's High Efficiency Video Coding (HEVC/H.265).
At first, VP9 was mainly used on Google's video platform YouTube. The emergence of the Alliance for Open Media, of which Google is a member, and its support for the ongoing development of VP9's successor AV1 led to growing interest in the format.
In contrast to HEVC, VP9 support is common among modern web browsers (see HTML5 video § Browser support). Android has supported VP9 since version 4.4 KitKat, while iOS/iPadOS added support for VP9 in iOS/iPadOS 14.
Parts of the format are covered by patents held by Google. The company grants free usage of its own related patents based on reciprocity, i.e. as long as the user does not engage in patent litigation.
History
VP9 is the last official iteration of the TrueMotion series of video formats, which Google acquired in 2010 through its $134 million purchase of On2 Technologies, the company that created the series.
The development of VP9 started in the second half of 2011 under the development names of Next Gen Open Video (NGOV) and VP-Next. The design goals for VP9 included reducing the bit rate by 50% compared to VP8 while maintaining the same video quality, and aiming for better compression efficiency than the MPEG High Efficiency Video Coding (HEVC) standard. In June 2013 the "profile 0" of VP9 was finalized, and two months later Google's Chrome browser was released with support for VP9 video playback. In October of that year a native VP9 decoder was added to FFmpeg, and to Libav six weeks later. Mozilla added VP9 support to Firefox in March 2014. In 2014 Google added two high bit depth profiles: profile 2 and profile 3.
In 2013 an updated version of the WebM format was published, featuring support for VP9 together with Opus audio.
In March 2013, MPEG LA dropped an announced assertion of disputed patent claims against VP8 and its successors after the United States Department of Justice started to investigate whether it was acting to unfairly stifle competition.
Throughout, Google has worked with hardware vendors to get VP9 support into silicon. In January 2014, Ittiam, in collaboration with ARM and Google, demonstrated its VP9 decoder for ARM Cortex devices. Using GPGPU techniques, the decoder was capable of 1080p at 30fps on an Arndale Board. In early 2015 Nvidia announced VP9 support in its Tegra X1 SoC, and VeriSilicon announced VP9 Profile 2 support in its Hantro G2v2 decoder IP.
In April 2015 Google released a significant update to its libvpx library, with version 1.4.0 adding support for 10-bit and 12-bit bit depth, 4:2:2 and 4:4:4 chroma subsampling, and VP9 multithreaded decoding/encoding.
In December 2015, Netflix published a draft proposal for including VP9 video in an MP4 container with MPEG Common Encryption.
In January 2016, Ittiam demonstrated an OpenCL-based VP9 encoder. The encoder targets ARM Mali mobile GPUs and was demonstrated on a Samsung Galaxy S6.
VP9 support was added to Microsoft's web browser Edge. It first appeared in development releases with EdgeHTML 14.14291 and was officially released in summer 2016.
In March 2017, Ittiam announced the completion of a project to enhance the encoding speed of libvpx. The speed improvement was said to be 50–70%, and the code "publicly available as part of libvpx".
Features
VP9 is customized for video resolutions greater than 1080p (such as UHD) and also enables lossless compression.
The VP9 format supports the following color spaces: Rec. 601, Rec. 709, Rec. 2020, SMPTE-170, SMPTE-240, and sRGB.
VP9 supports HDR video using hybrid log–gamma (HLG) and perceptual quantizer (PQ) transfer functions.
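As an illustration of one of these transfer functions, the following sketch implements the HLG opto-electrical transfer function (OETF) as defined in ARIB STD-B67 / Rec. 2100 (the function and constant names are ours; VP9 itself only signals the transfer characteristic in its metadata):

```python
import math

# HLG OETF constants from ARIB STD-B67 / Rec. 2100
A = 0.17883277
B = 1 - 4 * A                   # 0.28466892
C = 0.5 - A * math.log(4 * A)   # 0.55991073

def hlg_oetf(e):
    """Map linear scene light e in [0, 1] to an HLG signal value in [0, 1]."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return A * math.log(12 * e - B) + C
```

The constants are chosen so that the square-root and logarithmic segments join continuously at e = 1/12 (signal value 0.5) and the curve maps peak scene light to a signal value of approximately 1.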
Efficiency
An early comparison that took varying encoding speed into account showed x265 narrowly beating libvpx at the very highest quality (slowest encoding), whereas libvpx was superior at every other encoding speed, as measured by SSIM.
In a subjective quality comparison conducted in 2014 featuring the reference encoders for HEVC (HM 15.0), MPEG-4 AVC/H.264 (JM 18.6), and VP9 (libvpx 1.2.0 with preliminary VP9 support), VP9, like H.264, required about two times the bitrate to reach video quality comparable to HEVC, while with synthetic imagery VP9 was close to HEVC.
By contrast, another subjective comparison from 2014 concluded that at higher quality settings HEVC and VP9 were tied at a 40 to 45% bitrate advantage over H.264.
Netflix, after a large test in August 2016, concluded that libvpx was 20% less efficient than x265, but by October the same year also found that tweaking encoding parameters could "reduce or even reverse gap between VP9 and HEVC". At NAB 2017, Netflix shared that they had switched to the EVE encoder, which according to their studies offered better two-pass rate control and was 8% more efficient than libvpx.
An offline encoder comparison between libvpx, two HEVC encoders and x264 in May 2017 by Jan Ozer of Streaming Media Magazine, with encoding parameters supplied or reviewed by each encoder vendor (Google, MulticoreWare and MainConcept respectively), and using Netflix's VMAF objective metric, concluded that "VP9 and both HEVC codecs produce very similar performance" and "Particularly at lower bitrates, both HEVC codecs and VP9 deliver substantially better performance than H.264".
Performance
An encoding speed versus efficiency comparison of the reference implementation in libvpx, x264 and x265 was made by an FFmpeg developer in September 2015. By SSIM index, libvpx was mostly superior to x264 across the range of comparable encoding speeds, but its main benefit appeared at the slower end, around x264@veryslow: libvpx reached a sweet spot of 30–40% bitrate improvement at encoding times up to twice that preset, whereas x265 only became competitive with libvpx at around 10 times the encoding time of x264@veryslow. It was concluded that libvpx and x265 were both capable of the claimed 50% bitrate improvement over H.264, but only at 10–20 times the encoding time of x264. Judged by the objective quality metric VQM in early 2015, the VP9 reference encoder delivered video quality on par with the best HEVC implementations.
A decoder comparison by the same developer showed 10% faster decoding for ffvp9 than ffh264 for same-quality video, or "identical" at same bitrate. It also showed that the implementation can make a difference, concluding that "ffvp9 beats libvpx consistently by 25–50%".
Another decoder comparison indicated 10–40 percent higher CPU load than H.264 (but does not say whether this was with ffvp9 or libvpx), and that on mobile, the Ittiam demo player was about 40 percent faster than the Chrome browser at playing VP9.
Profiles
There are several variants of the VP9 format (known as "coding profiles"), which successively allow more features; profile 0 is the basic variant, requiring the least from a hardware implementation:
profile 0 – color depth: 8 bit/sample; chroma subsampling: 4:2:0
profile 1 – color depth: 8 bit; chroma subsampling: 4:2:2, 4:2:0, 4:4:4
profile 2 – color depth: 10 or 12 bit; chroma subsampling: 4:2:0
profile 3 – color depth: 10 or 12 bit; chroma subsampling: 4:2:2, 4:2:0, 4:4:4
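To illustrate how these constraints combine, here is a small sketch (not part of any VP9 API; the table and function names are ours) that picks the lowest profile able to carry a given bit depth and chroma subsampling:

```python
# Profile table from the text: (allowed bit depths, allowed chroma subsamplings)
PROFILES = {
    0: ({8}, {"4:2:0"}),
    1: ({8}, {"4:2:2", "4:2:0", "4:4:4"}),
    2: ({10, 12}, {"4:2:0"}),
    3: ({10, 12}, {"4:2:2", "4:2:0", "4:4:4"}),
}

def min_profile(bit_depth, subsampling):
    """Return the lowest VP9 coding profile supporting the given parameters."""
    for profile in sorted(PROFILES):
        depths, subs = PROFILES[profile]
        if bit_depth in depths and subsampling in subs:
            return profile
    raise ValueError("no VP9 profile supports this combination")
```

For example, 8-bit 4:2:0 content fits profile 0, while 10-bit 4:4:4 content requires profile 3.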
Levels
VP9 offers 14 levels (1, 1.1, 2, 2.1, 3, 3.1, 4, 4.1, 5, 5.1, 5.2, 6, 6.1 and 6.2), each capping parameters such as the luma sample rate, picture size and bitrate.
Technology
VP9 is a traditional block-based transform coding format. The bitstream format is relatively simple compared to formats that offer similar bitrate efficiency like HEVC.
VP9 has many design improvements compared to VP8. Its biggest improvement is support for the use of coding units of 64×64 pixels. This is especially useful with high-resolution video. Also, the prediction of motion vectors was improved. In addition to VP8's four modes (average/"DC", "true motion", horizontal, vertical), VP9 supports six oblique directions for linear extrapolation of pixels in intra-frame prediction.
New coding tools also include:
eighth-pixel precision for motion vectors,
three different switchable 8-tap subpixel interpolation filters,
improved selection of reference motion vectors,
improved coding of offsets of motion vectors to their reference,
improved entropy coding,
improved and adapted (to new block sizes) loop filtering,
the asymmetric discrete sine transform (ADST),
larger discrete cosine transforms (DCT, 16×16 and 32×32), and
improved segmentation of frames into areas with specific similarities (e.g. fore-/background)
In order to enable some parallel processing of frames, video frames can be split along coding unit boundaries into up to four rows of tiles, and into evenly spaced tile columns 256 to 4096 pixels wide, with each tile column coded independently. Tiling is mandatory for video wider than 4096 pixels. A tile header contains the tile size in bytes so decoders can skip ahead and decode each tile row in a separate thread. The image is then divided into coding units called superblocks of 64×64 pixels, which are adaptively subpartitioned in a quadtree coding structure. They can be subdivided either horizontally or vertically or both; square (sub)units can be subdivided recursively down to 4×4 pixel blocks. Subunits are coded in raster scan order: left to right, top to bottom.
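The recursive square subdivision can be sketched as follows. This toy model covers only the square quadtree case (VP9 also allows horizontal-only and vertical-only splits), and the `should_split` callback stands in for the encoder's actual rate-distortion decision:

```python
def partition(x, y, size, should_split):
    """Yield the leaf blocks (x, y, size) of a superblock, splitting
    squares recursively down to a minimum of 4x4 pixels."""
    if size > 4 and should_split(x, y, size):
        half = size // 2
        for dy in (0, half):       # raster scan: top row of sub-blocks first,
            for dx in (0, half):   # left to right within each row
                yield from partition(x + dx, y + dy, half, should_split)
    else:
        yield (x, y, size)

# Splitting everything decomposes a 64x64 superblock into 256 blocks of 4x4
leaves = list(partition(0, 0, 64, lambda x, y, s: True))
```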
Starting from each key frame, decoders keep 8 frames buffered to be used as reference frames or to be shown later. Transmitted frames signal which buffer to overwrite and can optionally be decoded into one of the buffers without being shown. The encoder can send a minimal frame that just triggers one of the buffers to be displayed ("skip frame"). Each inter frame can reference up to three of the buffered frames for temporal prediction. Up to two of those reference frames can be used in each coding block to calculate a sample data prediction, using spatially displaced (motion compensation) content from a reference frame or an average of content from two reference frames ("compound prediction mode"). The (ideally small) remaining difference (delta encoding) from the computed prediction to the actual image content is transformed using a DCT or ADST (for edge blocks) and quantized.
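A minimal sketch of this buffering scheme, with invented names (`RefPool`, `compound_predict` are ours, not VP9 bitstream terminology), illustrating the eight slots and the rounded averaging used in compound prediction:

```python
class RefPool:
    """Eight reference-frame slots, as kept by a VP9 decoder."""
    def __init__(self):
        self.slots = [None] * 8

    def store(self, index, frame):
        """A transmitted frame signals which of the 8 buffers it overwrites."""
        self.slots[index] = frame

def compound_predict(block_a, block_b):
    """Rounded average of sample blocks from two reference frames,
    as in VP9's compound prediction mode."""
    return [(a + b + 1) // 2 for a, b in zip(block_a, block_b)]
```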
Something like a B-frame can be coded while preserving the original frame order in the bitstream, using a structure named superframes. Hidden alternate reference frames can be packed together with an ordinary inter frame, plus a skip frame that triggers display of the previously hidden altref content from its reference frame buffer right after the accompanying P-frame.
VP9 enables lossless encoding by transmitting at the lowest quantization level (q index 0) an additional 4×4-block encoded Walsh–Hadamard transformed (WHT) residue signal.
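Because the Walsh–Hadamard matrix contains only ±1 entries, the transform uses only additions and subtractions and is exactly invertible on integers, which is what makes lossless residue coding possible. A sketch of a 2-D 4×4 WHT and its inverse (a generic Sylvester-ordered WHT for illustration, not VP9's exact in-place butterfly):

```python
# Sylvester-ordered 4x4 Hadamard matrix; H is symmetric and H·H = 4·I
H = [[1,  1,  1,  1],
     [1, -1,  1, -1],
     [1,  1, -1, -1],
     [1, -1, -1,  1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def wht_forward(block):
    """2-D WHT of a 4x4 residue block: Y = H·X·H (H is symmetric)."""
    return matmul(matmul(H, block), H)

def wht_inverse(coeffs):
    """Exact inverse: X = H·Y·H / 16; every entry of H·Y·H is a
    multiple of 16, so integer division loses nothing."""
    y = matmul(matmul(H, coeffs), H)
    return [[v // 16 for v in row] for row in y]
```

The round trip `wht_inverse(wht_forward(x)) == x` holds exactly for any integer block, which is the property a lossless mode depends on.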
In order to be seekable, raw VP9 bitstreams have to be encapsulated in a container format, for example Matroska (.mkv), its derived WebM format (.webm) or the older minimalistic Indeo video file (IVF) format which is traditionally supported by libvpx. VP9 is identified as V_VP9 in WebM and VP90 in MP4, adhering to respective naming conventions.
Adoption
Adobe Flash, which traditionally used VPx formats up to VP7, was never upgraded to VP8 or VP9, but instead to H.264. Therefore, VP9 often penetrated corresponding web applications only with the gradual shift from Flash to HTML5 technology, which was still somewhat immature when VP9 was introduced.
Trends towards UHD resolutions, higher color depth and wider gamuts are driving a shift towards new, specialized video formats. Given the clear development perspective and industry support demonstrated by the founding of the Alliance for Open Media, as well as the costly and complex licensing situation of HEVC, it is expected that users of the hitherto leading MPEG formats will often switch to the royalty-free alternative formats of the VPx/AVx series instead of upgrading to HEVC.
Content providers
A main user of VP9 is Google's popular video platform YouTube, which offers VP9 video at all resolutions along with Opus audio in the WebM file format, through DASH streaming.
Another early adopter was Wikipedia (specifically Wikimedia Commons, which hosts multimedia files across Wikipedia's subpages and languages). Wikipedia endorses open and royalty-free multimedia formats. As of 2016, the three accepted video formats are VP9, VP8 and Theora.
Since December 2016, Netflix has used VP9 encoding for its catalog, alongside H.264 and HEVC. As of February 2020, Netflix has begun adopting AV1 for mobile devices, much as VP9 was first rolled out on that platform.
Google Play Movies & TV uses (at least in part) VP9 profile 2 with Widevine DRM.
Stadia uses VP9 for video game streaming up to 4K on supported hardware such as the Chromecast Ultra, supported mobile phones, and computers.
Encoding services
A series of cloud encoding services offer VP9 encoding, including Amazon, Bitmovin, Brightcove, castLabs, JW Player, Telestream, and Wowza.
Encoding.com has offered VP9 encoding since Q4 2016; in that year, VP9 reached a yearly average popularity of 11% among its customers.
Web middleware
JW Player supports VP9 in its widely used software-as-a-service HTML5 video player.
Browser support
VP9 is implemented in these web browsers:
Chromium and Google Chrome (usable by default since version 29 from May and August 2013, respectively)
Opera (since version 15 from July 2013)
Firefox (since version 28 from March 2014)
Microsoft Edge (as of summer 2016)
Safari (as of Safari Technology Preview Release 110, with official support added in version 14)
Internet Explorer does not support VP9 at all.
In March 2016 an estimated 65 to 75% of browsers in use on desktop and notebook systems were able to play VP9 videos in HTML5 webpages, based on data from StatCounter.
Operating system support
Media player software support
VP9 is supported in all major open source media player software, including VLC, MPlayer/MPlayer2/MPV, Kodi, MythTV, and FFplay.
Hardware device support
Android has had VP9 software decoding since version 4.4 "KitKat". For a list of consumer electronics with hardware support, including TVs, smartphones, set top boxes and game consoles, see webmproject.org's list.
Hardware implementations
The following chips, architectures, CPUs, GPUs and SoCs provide hardware acceleration of VP9. Some of these are known to have fixed-function hardware, but this list also incorporates GPU- or DSP-based implementations – software implementations on non-CPU hardware. The latter category also serves the purpose of offloading the CPU, but its power efficiency is not as good as that of fixed-function hardware (it is more comparable to well-optimized SIMD-aware software).
This is not a complete list. Further SoCs, as well as hardware IP vendors can be found at webmproject.org.
Video game consoles
The Sony PlayStation 5 supports capturing 1080p and 2160p footage using VP9 in a WebM container.
Software implementations
The reference implementation from Google is found in the free software programming library libvpx.
It has a single-pass and a two-pass encoding mode, but the single-pass mode is considered broken and does not offer effective control over the target bitrate.
Encoding
libvpx
SVT-VP9 – Scalable Video Technology for VP9 – open-source encoder by Intel
Eve – a commercial encoder
Decoding
libvpx
ffvp9 (FFmpeg)
FFmpeg's VP9 decoder takes advantage of a corpus of SIMD optimizations shared with other codecs to make it fast. A comparison made by an FFmpeg developer indicated that this was faster than libvpx, and compared to FFmpeg's h.264 decoder, "identical" performance for same-bitrate video, or about 10% faster for same-quality video.
Patent claims
In March 2019, Luxembourg-based Sisvel announced the formation of patent pools for VP9 and AV1. Members of the pools included JVCKenwood, NTT, Orange S.A., Philips, and Toshiba, all of whom were also licensing patents to the MPEG-LA for either the AVC, DASH, or the HEVC patent pools. A list of claimed patents was first published on 10 March 2020. This list contains over 650 patents.
Sisvel's prices are €0.24 for display devices and €0.08 for non-display devices using VP9, but it would not seek royalties for encoded content. However, its license makes no exemption for software.
According to The WebM Project, Google does not plan to alter its current or upcoming usage of VP9 or AV1 even though it is aware of the patent pools; none of the licensors of the patent pools were involved in the development of VP9 or VP8, and third parties cannot be stopped from demanding licensing fees for any technology that is open-source, royalty-free, and/or free of charge.
Successor: from VP10 to AV1
On September 12, 2014, Google announced that development on VP10 had begun and that after the release of VP10 they planned to have an 18-month gap between releases of video formats. In August 2015, Google began to publish code for VP10.
However, Google decided to incorporate VP10 into AOMedia Video 1 (AV1). The AV1 codec was developed based on a combination of technologies from VP10, Daala (Xiph/Mozilla) and Thor (Cisco). Accordingly, Google has stated that they will not deploy VP10 internally nor officially release it, making VP9 the last of the VPx-based codecs to be released by Google.
External links
Bitstream Overview
Jan Ozer (Streaming Media Magazine), June 2016: VP9 Finally Comes of Age, But Is it Right for Everyone?
Ronald Bultje: Overview of the VP9 video codec, 13 December 2016
Mojette Transform

The Mojette Transform is an application of discrete geometry. More specifically, it is a discrete and exact version of the Radon transform, and thus a projection operator.
The IRCCyN laboratory (UMR CNRS 6597) in Nantes, France, has been developing it since 1994.
The Mojette Transform has two key characteristics: first, it uses only additions and subtractions; second, it is redundant, spreading the initial geometrical information across several projections.
This transform uses discrete geometry in order to dispatch information onto a discrete geometrical support. This support is then projected by the Mojette operator along discrete directions. When enough projections are available, the initial information can be reconstructed.
The Mojette transform has already been used in numerous application domains:
Medical tomography
Network packet transfer
Distributed storage on disks or networks
Image fingerprinting and image cryptography schemes
History
After one year of research, the first communication introducing the Mojette Transform was presented in May 1995 at the first edition of the CORESA national congress in Rennes. Many others followed over its 18 years of existence. In 2011, the book The Mojette Transform: Theory and Applications, published by ISTE-Wiley, was well received by the scientific community. All this support encouraged the IRCCyN research team to continue research on this topic.
Jeanpierre Guédon, professor and inventor of the transform, called it the "Mojette Transform". The word "Mojette" comes from the name of white beans in Vendée, originally written "Moghette" or "Mojhette". In many countries, the bean is a basic educational tool representing an exact unit, used to teach additions and subtractions visually. The choice of the name "Mojette" therefore emphasizes that the transform uses only exact units in additions and subtractions.
There is an old French saying in Vendée, "counting his mojettes", meaning knowing how to count his money. Curiously, in the English-speaking world the term "bean counter" refers to an official fastidiously making additions, and an old English expression says "he knows how many beans make five", meaning "he knows his stuff".
The original purpose of the Mojette Transform was to create a discrete tool to divide the Fourier plane into angular and radial sectors. The first attempted application was the psychovisual encoding of images, reproducing the human vision channel. However, it was never realized.
Mathematics
The "raw" Mojette transform of a function f defined on the lattice points (k, l) is the set of projections

proj_{p,q}(b) = Σ_k Σ_l f(k, l) Δ(b + qk − pl),   (1)

where Δ(m) is the Kronecker delta, equal to 1 when m = 0 and 0 otherwise, so that a pixel (k, l) contributes to the bin b lying on the line b + qk − pl = 0.
Figure 1 helps to explain this "raw" Mojette transform.

We start with the function f represented by 16 pixels, p1 to p16. The possible values of the function at the point (k, l) differ according to the application. They can be binary (0 or 1), often used to distinguish the object from the background; ternary, as in the Mojette game; or a finite set of integer values from 0 to n − 1, where n is most often a power of 2 or a prime number. In principle the values could even be arbitrary integers or real numbers, with an infinite number of possibilities, although this idea has never been used.

With the index "k" as "kolumn" (column) and "l" as "line", we define a Cartesian coordinate system, though only integer coordinates are needed here. In Figure 2, we have arbitrarily chosen the bottom-left point as the origin (0,0) together with the directions of the two axes. The coordinates of each pixel are denoted in red in Figure 2.
For the projections, the coordinate system is derived from the one of the grid. Indeed, it meets two requirements:
1) The pixel (0,0) is always projected onto the point 0 of the projection (a consequence of the linearity of the Mojette operator).

2) The direction of projection is fixed "counterclockwise", as in trigonometry, going from 0° to 180°.

Together, these requirements fix the positions of the bins, shown in blue in Figure 2.
Returning to formula (1): the red dots correspond to the indices (k, l) and the blue dots to the index b. The only elements remaining to clarify are the (p, q) values.

These two values (p, q) are precisely those characterizing the Mojette transform: they define the projection angle. Figure 3 shows colored arrows corresponding, via the color code, to the projections indexed by (p, q). For the 90° angle, the projection is shown below the grid for convenience, but its direction is upward. Table 1 shows the correspondence between the angles in degrees and the values of p and q.
Table 1: Correspondence of the projection angles with the direction equation b + qk − pl = 0
The only valid Mojette angles are given by the following rules:
An angle is given by the direction of projection in line and column
A direction is composed of two integers (p, q) with gcd (p, q) = 1
An angle is always between 0 and 180 °, which means that q is never negative
These rules ensure a one-to-one correspondence between an angle and its (p, q) values. For example, for the 45° angle, Rule 2 forbids the pairs (2,2) and (3,3), and Rule 3 prohibits (−2,−2) and (−1,−1); only the pair (p = 1, q = 1) satisfies all three rules.
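The rules and the direction equation b + qk − pl = 0 translate directly into code. A small sketch (the function names are ours, not from any published Mojette library):

```python
from math import gcd

def is_valid_angle(p, q):
    """Rules 1-3: integers (p, q) with gcd(p, q) = 1, q never negative,
    and the 0-degree direction written as (1, 0)."""
    return q >= 0 and gcd(abs(p), q) == 1 and (q > 0 or p == 1)

def mojette_projection(image, p, q):
    """Project a 2-D array image[l][k] along direction (p, q).
    Pixel (k, l) contributes to bin b = p*l - q*k, i.e. b + q*k - p*l = 0."""
    if not is_valid_angle(p, q):
        raise ValueError("invalid Mojette direction")
    bins = {}
    for l, row in enumerate(image):
        for k, value in enumerate(row):
            b = p * l - q * k
            bins[b] = bins.get(b, 0) + value
    return bins
```

For a P×Q image the projection along (p, q) has (Q − 1)|p| + (P − 1)|q| + 1 bins, and every projection sums to the total mass of the image, whatever the direction.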
Applications & Achievements
Distributed storage on disks or networks
The most important area of application of the Mojette Transform is distributed storage. In particular, it is used in RozoFS, an open-source distributed file system, as an erasure code that provides reliability while significantly reducing the total amount of stored data compared to classical techniques like replication (typically by a factor of 2). It thus significantly reduces the cost of the storage cluster in terms of hardware, maintenance and energy consumption, for example.
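The saving over replication comes down to simple arithmetic. With illustrative numbers (ours, not RozoFS's actual parameters): if an object is cut into k data projections and stored as n = k + r projections, any k of which suffice to rebuild it, the storage overhead is n/k:

```python
def storage_overhead(n_total, k_needed):
    """Ratio of stored data to original data for an erasure-coded object."""
    return n_total / k_needed

# Illustrative: 6 projections, any 4 of which rebuild the object -> 1.5x
# stored data, versus 2.0x for keeping two full replicas
erasure = storage_overhead(6, 4)   # 1.5
replica = storage_overhead(2, 1)   # 2.0
```

Both schemes here tolerate the loss of any two stored pieces, but the erasure-coded layout stores 25% less data, which is the "factor of 2" saving the text refers to.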
In 2010, Pierre Evenou, research engineer of the IVC team IRCCyN laboratory, decided to create the start-up Fizians (currently known as Rozo Systems) using this application. The start-up offers storage solutions in cloud computing, virtualization, storage servers, file servers, backup and archiving.
Network packet transfer
Thanks to the redundancy of the transform, sent packets can be fragmented without loss. Additionally, using only additions and subtractions increases the speed of information transmission. Finally, the information cannot be reconstructed without knowing the initial projection angles, which also provides data security.
This application was selected by Thales Cholet for its ad hoc network (using a wireless network and terminals to relay messages between them) in order to secure the information and to provide multiple paths between source and destination. In 2002, the start-up PIBI used this technology to provide secure Internet payment services.
Medical tomography
In the field of medical imaging, the properties of the Mojette Transform create a direct mapping and address the missing wedge problem. However, image acquisition based directly on the Mojette transform has not yet been developed. The problem of obtaining exact Mojette values from approximate data acquisition has been studied, but this work remains to be continued. The post-processing of medical images, by contrast, is progressing well, since there the data acquisition is already done.
These results were used by the company Keosys in 2001, with Jerome Fortineau, and by the company Qualiformed, created in 2006 by Stephen Beaumont. Prof. Guédon and the IRCCyN laboratory were heavily involved in the creation of these two companies. The companies have financed several PhD students and participated in research projects to continue the development of the application in medical tomography. The results have led to patent applications and to implementations on their image-processing equipment.
Watermarking and image encryption
Cryptography and watermarking were also part of the research conducted in the IRCCyN laboratory, providing solutions for security and authentication.

In cryptography, the instability of the Mojette transform secures data: because the transform is exact, the encrypted information admits no deviation, however minimal. For watermarking, the transform is very effective for fingerprinting: by inserting Mojette-transform marks in images, one can authenticate documents using the same properties as in cryptography.
M. Kalantari, F. Jung, J. Guédon, and N. Paparoditis, “The Five Points Pose Problem : A New and Accurate Solution Adapted to any Geometric Configuration,” in The Pacific-Rim Symposium on Image and Video Technology (PSIVT), 2009, vol. 5414, p. .
D. Coeurjolly and N. Normand, “Discrete geometry and projections (chap 1),” in The Mojette Transform: Theory and Applications, jeanpierre Guédon, Ed. iste & wiley, 2009, p. 15 pages.
J. Guédon and N. Normand, “Reconstructability with the inverse Mojette transform (chap 4),” in The Mojette Transform: Theory and Applications, jeanpierre Guédon, Ed. iste & wiley, 2009, p. 15 pages.
J. Guédon and N. Normand, “Direct Mojette transform (chap 3),” in The Mojette Transform: Theory and Applications, jeanpierre Guédon, Ed. iste & wiley, 2009, p. 23 pages.
A. Kingston and F. Autrusseau, “Lossless compression (chap 9),” in The Mojette transform: Theory and Applications, jeanpierre Guédon, Ed. iste & wiley, 2009, p. 19 pages.
A. Kingston, F. Autrusseau, E. Grall, T. Hamon, and B. Parrein, “Mojette based security (chap 10),” in The Mojette transform: Theory and Applications, J. Guédon, Ed. iste & wiley, 2009, p. 25 pages.
A. Kingston, F. Autrusseau, and B. Parrein, “Multiresolution Mojette transform (chap 6),” in The Mojette transform: Theory and Applications, jeanpierre Guédon, Ed. iste & wiley, 2009, p. 29 pages.
N. Normand, I. Svalbe, P. Evenou, and A. Kingston, “Inverse Mojette transform algorithms (chap 5),” in The Mojette Transform: Theory and Applications, J. Guédon, Ed. iste & wiley, 2009, p. 25 pages.
B. Parrein, F. Boulos, N. Normand, and P. Evenou, “Communication, networks and storage (chap 7),” in The Mojette Transform: Theory and Applications, J. Guédon, Ed. iste & wiley, 2009, p. 29 pages.
M. Servières, J. Guédon, N. Normand, and Y. Bizais, “Mojette discrete tomography (chap 8),” in The Mojette Transform: Theory and Applications, jeanpierre Guédon, Ed. iste & wiley, 2009, p. 29 pages.
I. Svalbe and J. Guédon, “Discrete versions of the Radon Transform (chap 2),” in The Mojette Transform: Theory and Applications, jeanpierre Guédon, Ed. iste & wiley, 2009, p. 17 pages.
J. Guédon, The Mojette transform. Theory and applications. ISTE-WILEY, 2009.
S. Beaumont, J. Guédon, and Y. Ben Hdech, “Contrôle qualité dosimétrique des systèmes de planification de traitement : nouvelle méthode basée sur l’utilisation de PENELOPE et des Objets Tests Numériques anatomiques,” in Journées Scientifiques de la Société Française de Physique Médicale (SFPM), 2010, p. 1.
Y. Ben Hdech, S. Beaumont, and J. Guédon, “Développement d’une méthode de Contrôle qualité des Systèmes de Planification des Traitements, utilisés en radiothérapie, au moyen du code Monte-Carlo PENELOPE et des Objets Tests Numériques,” in Journée des doctorants de l’École Doctorale STIM JDOC, 2010, p. 1.
Y. Ben Hdech, S. Beaumont, J. Guédon, and T. Torfeh, “New method to perform dosimetric quality control of treatment planning system using PENELOPE Monte-Carlo and anatomical digital test objects,” in SPIE Medical Imaging 2010, 2010, vol. 7622, p. .
Y. Amouriq, P. Evenou, A. Arlicot, N. Normand, and P. Layrolle, “Evaluation of trabecular bone patterns on dental radiographic images: influence of cortical bone,” in SPIE Medical Imaging, 2010, vol. 7626, p. 76261M.
Y. Amouriq, P. Evenou, A. Arlicot, N. Normand, P. Layrolle, P. Weiss, and J. Guédon, “Evaluation of trabecular bone patterns on dental radiographic images: influence of cortical bone,” in SPIE Medical Imaging, 2010, p. 10 pages.
A. Arlicot, Y. Amouriq, P. Evenou, N. Normand, and J. Guédon, “A single scan skeletonization algorithm: application to medical imaging of trabecular bone,” in SPIE Medical Imaging, 2010, vol. 7623, p. 762317.
C. Zhang, J. Dong, J. Li, and F. Autrusseau, “A New Information Hiding Method for Image Watermarking Based on Mojette Transform,” in Second International Symposium on Networking and Network Security, 2010, pp. 124–128.
N. Normand, I. Svalbe, B. Parrein, and A. Kingston, “Erasure Coding with the Finite Radon Transform,” in Wireless Communications & Networking Conference, 2010, pp. 1–6.
S. S. Chandra, N. Normand, A. Kingston, J. Guédon, and I. Svalbe, “Fast Mojette Transform for Discrete Tomography,” 13-Jul-2012.
J. Guédon, C. Liu, and J. Guédon, “The 2 and 3 materials scene reconstructed from some line Mojette projections,” in IEEE IPTA Conference, 2010, p. 6.
Y. Amouriq, J. Guédon, N. Normand, A. Arlicot, Y. Ben Hdech, and P. Weiss, “Bone texture analysis on dental radiographic images: results with several angulated radiographs on the same region of interest,” in SPIE Medical Imaging 2011: Biomedical Applications in Molecular, Structural, and Functional Imaging, 2012, vol. 7965, p. 796525.
S. Beaumont, T. Torfeh, R. Latreille, Y. Ben Hdech, and J. Guédon, “New method to test the gantry, collimator and table rotation angles of a linear accelerator used in radiation therapy,” in SPIE Medical Imaging 2011, 2011, vol. 7961, p. 796153.
Y. Ben Hdech, S. Beaumont, J. Guédon, and C. Sylvain, “Dosimetric quality control of Eclipse treatment planning system using pelvic digital test object,” in Medical Imaging 2011: Physics of Medical Imaging, 2011, vol. 7961, p. 79613F.
A. Arlicot, P. Evenou, and N. Normand, “Single-scan skeletonization driven by a neighborhood-sequence distance,” in International workshop on Combinatorial Image Analysis, IWCIA, 2011, pp. 61–72.
A. Arlicot, N. Normand, Y. Amouriq, and J. Guédon, “Extraction of bone structure with a single-scan skeletonization driven by distance,” in First Sino-French Workshop on Education and Research collaborations in Information and Communication Technologies, SIFWICT, 2011, p. 2 pages.
Y. Ben Hdech, D. Autret, S. Beaumont, and J. Guédon, “TPS dosimetric evaluation using 1540-IAEA Package and Monte-Carlo simulations,” in ESTRO International Oncology Forum, 2011, p. 1.
C. Liu, J. Guédon, I. Svalbe, and Y. Amouriq, “Line Mojette ternary reconstructions and ghosts,” in IWCIA, 2011, p. 11.
C. Liu and J. Guédon, “The limited material scenes reconstructed by line Mojette algorithms,” in Franco-Chinese conference, 2011, p. 2.
J. Dong, L. Su, Y. Zhang, F. Autrusseau, and Y. Zhanbin, “Estimating Illumination Direction of 3D Surface Texture Based on Active Basis and Mojette Transform,” Journal of Electronic Imaging, vol. 21, no. 013023, p. 28 pages, Apr. 2012.
D. Pertin, G. D’Ippolito, N. Normand, and B. Parrein, “Spatial Implementation for Erasure Coding by Finite Radon Transform,” in International Symposium on signal, Image, Video and Communication 2012, 2012, pp. 1–4.
P. Bléry, Y. Amouriq, J. Guédon, P. Pilet, N. Normand, N. Durand, F. Espitalier, A. Arlicot, O. Malard, and P. Weiss, “Microarchitecture of irradiated bone: comparison with healthy bone,” in SPIE Medical Imaging, 2012, vol. 8317, p. 831719.
S. Chandra, I. Svalbe, J. Guedon, A. Kingston, and N. Normand, “Recovering Missing Slices of the Discrete Fourier Transform using Ghosts,” IEEE Transactions on Image Processing, vol. 21, no. 10, pp. 4431–4441, Jul. 2012.
H. Der Sarkissian, Jp. Guédon, P. Tervé, N. Normand and I. Svalbe. (2012)." Evaluation of Discrete Angles Rotation Degradation for Myocardial Perfusion Imaging", EANM Annual Congress 2012.
C. Liu and J. Guédon, “Finding all solutions of the 3 materials problem,” in proceedings of SIFWICT, 2013, p. 6.
B. Recur, H. Der Sarkissian, Jp. Guédon and I.Svalbe, "Tomosynthèse à l’aide de transformées discrètes", in Proceeding TAIMA 2013
H. Der Sarkissian, B. Recur, N. Normand and Jp. Guédon, "Mojette space Transformations", in proceedings of SWIFCT 2013.
B. Recur, H. Der Sarkissian, M. Servières, N.Normand, Jp. Guédon, "Validation of Mojette Reconstruction from Radon Acquisitions" in Proceedings of 2013 IEEE International Conference on Image Processing.
H. Der Sarkissian, B. Recur, N. Normand, Jp. Guédon. (2013), "Rotations in the Mojette Space" in 2013 IEEE International Conference on Image Processing.
External links
Website of the IVC team of the IRCCyN lab
Game on line based on the Mojette transform
Official website of ROZOFS company
Official website of KEOSYS company
Official website of QUALIFORMED company
Signal processing |
715999 | https://en.wikipedia.org/wiki/Parallel%20Line%20Internet%20Protocol | Parallel Line Internet Protocol | The Parallel Line Internet Protocol (PLIP) is a computer networking protocol for direct computer-to-computer communications using the parallel port normally used for connections to a printer.
The Parallel Line Internet Protocol provides link layer services for the Internet Protocol, the protocol used for forming small local area networks and large computer networks such as the Internet. It enables computers that lack standard dedicated networking hardware, such as Ethernet, but that have older parallel port devices, to communicate.
Operation
The Internet Protocol Suite is the standards-based networking model and software specification for forming small and large computer networks, from local area networks to global communication systems, such as the Internet. It is usually implemented by software and hardware features that use Ethernet network interface cards, cabling, and networking switches or hubs.
Early personal computers did not have Ethernet hardware included in their design, and bus adapters were initially expensive. A solution was to use the then-standard parallel port, typically used for connection to a printer or similar output device. The ports on two computers are connected with a so-called null-printer cable, sometimes called a LapLink cable.
The LapLink cable connects five output pins of each parallel port to five input pins on the opposing port, one set for each direction. Because the parallel ports provide no common timing, synchronization is implemented via software handshaking: four of the five pins are used for data transfer and one is used for synchronization. The logical values at these pins are read and written directly by the software via input or output instructions.
This method does not connect the bidirectional data lines of the two devices, avoiding the situation where both ends drive the same line at the same time. The status lines ERROR, SLCT, PAPOUT, ACK and BUSY on one device are connected to data pins d0 through d4 respectively on the other.
Transmission of a byte is accomplished by dividing it into two nibbles of four bits each. Each nibble is transmitted by setting the four data lines according to the four nibble bits and then toggling the acknowledge line. This toggle indicates to the receiving host that the nibble is ready to be read. Once the receiving host has read the nibble, it toggles its own synchronization line to tell the transmitter that the nibble has been read and that a new one may be sent. Both hosts thus use a toggle on their acknowledge lines to indicate that the read or write operation has been performed, and each host has to wait for a toggle from the other before proceeding with a new operation.
As an example, the transfer of the nibble 0010 proceeds as follows:
t->r lines   r->t lines   operation
00010        0xxxx        transmitter sets data lines to 0010
10010        0xxxx        transmitter toggles ACK line
                          receiver detects toggle and reads 0010
10010        1xxxx        receiver toggles ACK line
                          transmitter detects toggle
When the transmitter detects the toggle, this procedure is repeated for the next nibble.
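The transmitter's side of this handshake can be sketched as follows. This is an illustrative model, not the actual driver code; it assumes the low-order nibble of each byte is sent first (the order is not specified above) and records only the line states the transmitter drives, matching the trace in the example:

```python
def byte_to_nibbles(value):
    """Split a byte into two 4-bit nibbles (low nibble first, an assumption)."""
    return [value & 0x0F, (value >> 4) & 0x0F]

def transmit_byte(value, ack=0):
    """Return the sequence of (data_lines, ack_line) states the transmitter
    drives for one byte, toggling ACK once per nibble to signal 'ready'."""
    states = []
    for nibble in byte_to_nibbles(value):
        states.append((nibble, ack))   # data lines set to the nibble bits
        ack ^= 1                       # toggle ACK: nibble is ready to read
        states.append((nibble, ack))
        # ...a real driver would now wait for the receiver's ACK toggle
    return states, ack
```

For the byte 0x42 this yields the nibbles 0x2 and then 0x4, each accompanied by one ACK toggle, so the ACK line returns to its initial level after every full byte.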
Internet Protocol packets are encapsulated into PLIP packets before transmission over the line. The encapsulated packet has the following structure:
packet length: 2 bytes, little endian
Ethernet header (mostly used for backward compatibility)
the IP packet
checksum: 1 byte, sum modulo 256 of bytes in the packet
The length and checksum are calculated over the second and third field only, so that the actual total length of the packet is three more than the length as reported in the first two bytes of the packet.
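A minimal sketch of this framing in Python, following the field layout above; `eth_header` and `ip_packet` are assumed to be arbitrary byte strings supplied by the caller:

```python
def plip_encapsulate(eth_header: bytes, ip_packet: bytes) -> bytes:
    """Build a PLIP frame: 2-byte little-endian length, payload, 1-byte checksum.
    Length and checksum cover only the header and IP packet, so the full frame
    is three bytes longer than the reported length."""
    payload = eth_header + ip_packet
    length = len(payload).to_bytes(2, "little")
    checksum = sum(payload) % 256          # sum of payload bytes, modulo 256
    return length + payload + bytes([checksum])
```

A frame built from a 14-byte header and a 6-byte packet reports a length of 20 but occupies 23 bytes on the wire, illustrating the "three more" rule.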
Similar methods
An analogous feature for serial communications ports is the Serial Line Internet Protocol (SLIP), which uses null-modem cables; PLIP, however, transfers four bits at a time rather than one and generally works at higher bitrates. The method is based on the "Crynwr" standard devised by Russ Nelson.
Ethernet may also be used as a direct computer-to-computer communications method using an Ethernet crossover cable.
Other point-to-point connections, such as USB host-to-host bridges or cables are also used to transfer files between two computers where a network is not necessary or available.
See also
Point-to-Point Protocol (PPP)
Direct cable connection
Serial Line Internet Protocol (SLIP)
References
External links
PLIP explanation
PLIP install how-to
PLIP-Install-HOWTO
A description of the PLIP protocol by Alessandro Rubini
Internet protocols
Link protocols |
870875 | https://en.wikipedia.org/wiki/ART%20image%20file%20format | ART image file format | ART is a proprietary image file format used mostly by the America Online (AOL) service and client software.
Technical details
The ART format (file extension ".art") holds a single still image that has been highly compressed. The format was designed to facilitate the quick downloading of images, among other things. Originally, the compression was developed by the Johnson-Grace Company, which was then acquired by AOL. When an image is converted to the ART format, the image is analyzed and the software decides what compression technique would be best. The ART format has similarities to the progressive JPEG format, and certain attributes of the ART format can lead to image quality being sacrificed for the sake of image compression (for instance, the image's color palette can be limited).
Usage by AOL
The AOL service used the ART image format for most of the image presentation of the online service. In addition, the AOL client's web browser also automatically served such images in the ART format to achieve faster downloads on the slower dialup connections that were prevalent in those days. This conversion was done in the AOL proxy servers and could be optionally disabled by the user. This image conversion process effectively reduced the download time for image files. This technology was once branded as Turboweb and is now known as AOL TopSpeed.
Software support for ART
The Graphic Workshop Professional software from Alchemy Mindworks Corp. supports ART files. (With later versions of the Graphic Workshop Professional software, an ART plugin from Alchemy Mindworks is required for this support.) For the Windows 2000 platform, Microsoft released the AOL Image Support Update, which added support for ART images. In June 2006, Microsoft released a security update that removed support for ART files from the Internet Explorer browser, in part to prevent issues where invalid ART data could cause Internet Explorer to unexpectedly quit. An ART file that is opened with the AOL picture viewer can be saved as another file type.
Software manufacturer Delcam (a subsidiary of Autodesk) has a product called ArtCAM, which uses the ART file extension for its CAD/CAM files.
References
External links
Data compression using adaptive bit allocation and hybrid lossless entropy coding
Optimal spline interpolation for image compression
Method and apparatus for compressing images
AOL
Graphics file formats |
44946634 | https://en.wikipedia.org/wiki/The%20Future%20of%20Freedom%20Conference | The Future of Freedom Conference | The Future of Freedom Conference is regarded as the first explicitly libertarian conference series ever held in the United States. Debuting in 1969, the conference's keynote speaker was Austrian economist Prof. Ludwig von Mises.
The Ludwig von Mises Conference (1969)
More than 200 students attended the Ludwig von Mises Conference that was held at Long Beach State University, now known as California State University, Long Beach, in May 1969, in response to Young Americans for Freedom's (YAF) purges of libertarian leaders just before the infamous national YAF St. Louis convention in August 1969.
In early March 1969, Dana Rohrabacher and Shawn Steel, co-chairs of California YAF, were removed by National YAF. Many purged leaders, and county chairs would eventually organize a new student organization called the California Libertarian Alliance (CLA). One of their first endeavors was to hold a gathering of libertarian leaders, writers and economists.
The idea to have some type of gathering evolved into a full-fledged conference at a college. The conference was initially planned and organized under the leadership of Dana Rohrabacher, who was the main founder and chairman of the Libertarian Caucus of YAF from 1966 to 1969. Rohrabacher, known as the "Johnny Grass-seed" of radical YAFers, later became a journalist, a speechwriter for President Reagan, and a U.S. Congressman from Southern California.
Other purged YAF members involved in the 1969 conference included the following:
Gene Berkman, draft resister, later to become owner of Renaissance Books in Riverside, CA; Bill "Shawn" Steel, USC student and statewide chairman of Youth for Reagan, later to become an attorney, a founder of the California Libertarian Party, and chairman of the California Republican Party; Ron Kimberling, later Dr. Ron Kimberling, radio show commentator who became executive director of the Ronald Reagan Foundation and Assistant Secretary for Higher Education in the last years of the Reagan administration; Dennis Turner, writer for Reason magazine and computer programmer; John Schurman, psychology major and staff worker for Rampart College.
In 1981 Shawn Steel commented about the reasons for the first conference, writing that "Freedom oriented people found themselves abandoned, either purged from the right or the left. Because of this political turmoil, we invited decentralists, individualists and voluntaryists in one forum to organize, discuss and study the philosophy we now call 'libertarianism.'"
Other speakers at 1969's Ludwig von Mises Conference included the following:
R. C. Hoiles, longtime publisher of The Register (now known as the Orange County Register) in Santa Ana, CA; Robert LeFevre, Rampart College founder and author; Skye D'Aureous (Durk Pearson), MIT graduate with a triple major in physics, biology, and psychology; John Hospers, USC philosophy professor.
Gary North, a conservative writer for the Christian newsletter Chalcedon Report, was horrified by what he saw at the conference. He accused the participants of "secular libertarianism" which he believed to be suicidal, especially the sinfulness of those who take illegal drugs. Instead of finding a conference hall full of "studious conservatives affirming faith in God and country," North instead discovered "eccentrics waving the black dollar sign flag" of anarchy.
The Ludwig von Mises Conference was sponsored by Long Beach State University YAF, California State University San Fernando Valley YAF, and the Action Coalition for Freedom.
Left-Right Festival of Mind Liberation (1970)
On February 28 and March 1, 1970, the California Libertarian Alliance hosted the Left-Right Festival of Mind Liberation at the University of Southern California (USC), backed by Riqui and Seymour Leon of Robert LeFevre's relocated Rampart Institute in Santa Ana, California. This conference attempted to patch differences between left and right anti-statist and anti-authoritarian thinkers, but failed to generate "any potential Left-Right coalition in the gestation stage."
Rebecca E. Klatch, in A Generation Divided, reported that "five hundred delegates met to discuss possibilities for a right-to-left cooperation." According to Dana Rohrabacher, he had high hopes of "forming a coalition between libertarians on the right and the pro-freedom elements on the left."
The keynote speaker was former president of Students for a Democratic Society (SDS) and author of Containment and Change, Carl Oglesby. "Designed to lay the groundwork for a libertarian/New Left anti-war coalition, Oglesby made the case that 'the Old Right and the New Left' were 'morally and politically' united in their opposition to war, and should work together."
Other featured speakers included the following:
William Allen, University of California Los Angeles (UCLA) economist; F. A. Harper, founder of the Institute for Humane Studies; Rod Manis, Stanford University research economist and writer for Rampart College; John Hospers, USC philosophy professor; Tibor Machan, an owner of Reason magazine and doctoral candidate at University of California Santa Barbara (UCSB); Karl Hess, former speech writer for Senator Barry Goldwater, Newsweek editor and author of Community Technology; Dana Rohrabacher, purged California chairman of Young Americans for Freedom; Samuel Edward Konkin III (a.k.a. SEK3), chemistry graduate student and editor of New Libertarian Notes at New York University; Phillip Abbott Luce, a defector from the pro-red Chinese Progressive Labor Movement in 1964, author of Road to Revolution, and recently resigned college director of YAF.
Other notable speakers - at general sessions or in workshops included the following:
Harvey Hukari, former chair of Stanford University YAF, and a founder of the Free Campus Movement; Harry Pollard, president of the Henry George School in Los Angeles; Don Jackson and Marcus Overseth, gay-rights activists; Robert Sagehorn, author, editor of the Western World Review and an associate of Western World Press; Terry Catchpole, editor and writer for National Lampoon; Skye D'Aureous (Durk Pearson), MIT graduate with a triple major in physics, biology, and psychology and co-publisher of The Libertarian Connection; Natalee Hall (Sandy Shaw), co-publisher of The Libertarian Connection; Willis E. Stone, founder and chairman of the Liberty Amendment Committee; William Harold Hutt, author and Austro-classical English economist noted for his early work in opposition to South African apartheid; Harold Demsetz, University of Chicago economist; Leon Kaspersky, co-founder of the underground libertarian newspaper Protos; Filthy Pierre (Erwin S. Strauss), author, "filk" musician, and science fiction convention organizer; John Haag, co-founder of the California Peace and Freedom Party; Richard Grant, author of The Incredible Bread Machine; Stan Kohl, war resister advocate; Randy Ericson; Bill Colson; Don Meinshausen, former YAF activist and a founder of the New Jersey Libertarian Alliance.
According to an article in the USC's Daily Trojan, "the California Libertarian Alliance, also cosponsor of the conference, states, 'The purpose of the conference is to unite libertarians and anarchists who have been active in the right wing and the new left, to find a means by which they can work together, without misunderstanding or antagonism.'"
The main organizers for the Left-Right Festival of Mind Liberation were Dana Rohrabacher, Bill "Shawn" Steel, and Gene Berkman. Steel also emceed. Action Coalition for Freedom (Don Franzen) and the California Libertarian Alliance sponsored the event.
The Festival of Liberation (1970)
The Annual Festival of Liberation, as it was now being called, attracted over 700 attendees to the University of Southern California (USC) from November 14 to 15, 1970, "to promote alternatives to authoritarianism and statism." City editor of the USC Daily Trojan, Linda Bieber, stated that the festival would "focus on the idea of knocking out oppressive and authoritarian cultures by libertarian social revolution and the idea that violent revolution will not eliminate the authoritarians, but instead will trade them in for newer models."
The conference featured the following speakers:
Paul Goodman, social critic, pacifist, left anarchist and author of Growing Up Absurd; Murray Rothbard, anarcho-capitalist and professor of economics at Brooklyn Polytechnic; Thomas Szasz, professor of psychiatry from the State University of New York at Syracuse; Phillip Abbott Luce, a defector from the pro-red Chinese Progressive Labor Movement in 1964, author of Road to Revolution; Joel Fort, University of California Berkeley professor, physician and author; Robert LeFevre, radio personality, author and founder of Rampart College; Skye D’Aureous (Durk Pearson), MIT graduate with a triple major in physics, biology, and psychology and cybernetics specialist; Leiflumen, education expert; Dana Rohrabacher, the student field representative for Rampart College; Robert Love, president of the Love Box company.
Moderator Lowell Ponte was a contributing editor to USC's Daily Trojan, a freelance writer, and a KPFK-FM radio talk show host. Commenting on the conference, Ponte wrote in the Daily Trojan "...Important as a basis for agreement was a mutual fear of the expanding power of government and the threat to individual liberty it represents. In some cases this fear envisions government, with its manipulative technologies now under development, as an incipient Brave New World."
Workshop sessions were conducted by the Institute for the Study of Non-Violence, founded by singer Joan Baez; the Center for the Study of Democratic Institutions; and the Portola Institute.
Movie night included James Stewart's "Shenandoah."
Rampart College, California Libertarian Alliance, and Action Coalition for Freedom sponsored 1970's Festival of Liberation.
Symposium on Political Implications of Modern Psychology (1972)
This conference was produced at USC's Town and Gown Foyer, February 12–13, 1972. According to the Daily Trojan, the topics included "the similarities between the humanist and the libertarian, the authoritarian personality, the state and the individualist behavior, political authoritarianism, and the need for self-respect".
The symposium featured the following speakers:
Dr. Nathaniel Branden, author, psychotherapist and former associate of novelist Ayn Rand; Robert LeFevre, author, TV/radio broadcaster and founder of Rampart College; George Bach, a clinical psychologist; Carl Faber, UCLA psychology professor; David Harris, draft resister and author of Goliath; Don Lewis, psychology professor; Alan Ross, psychology professor; Everett Shostrom, psychologist and author of Man the Manipulator; Roy Childs, libertarian essayist and writer of the influential essay "An Open Letter to Ayn Rand"; Carl Rogers, author of "Freedom to Learn: A View of What Education Might Become" and one of the founders of the humanistic approach to psychology.
Psychology Professor Alan Ross debated Don Lewis, chairman of the Psychology Department at USC, on "humanist vs. behaviorist" theories.
1972's Symposium on Political Implications of Modern Psychology was sponsored by the California Libertarian Alliance.
The Future of Victimless Crimes (1973)
The Future of Victimless Crimes was held at USC in February 1973. Featured speakers included the following:
Thomas Szasz, psychiatrist and author of The Myth of Mental Illness; Nathaniel Branden, author and psychotherapist, known for his work in the psychology of self-esteem; John Hospers, USC philosophy professor; Robert LeFevre, author, TV/radio broadcaster and founder of Rampart College; Los Angeles Police Chief Tom Redden, who, despite his conservative persona, spoke in support of lessening pot penalties; Sheriff of San Francisco Richard D. Hongisto "told the more than 500 people in attendance that police are ignoring enforcement and protection from violent crimes in order to go after easy drug bust arrests."
The Future of Freedom Conference (1977)
The first event actually named Future of Freedom Conference was held at USC in April 1977. It is best remembered for the turbulent debate between Prof. David Friedman, son of Milton Friedman, and SDS radical activist and later California state senator Tom Hayden. According to a 1980 Future of Freedom Conference brochure, "Tom Hayden was unaware of the libertarian philosophy. Mistaking Friedman for a conservative, Hayden attacked military spending and asked, 'What about the Pentagon?' Before Friedman could disagree, the audience roared 'Abolish the Pentagon!' Shocked, Hayden paused and quietly responded, 'Well, we must have a Pentagon.'" Hayden accused Friedman of unfair debate tactics. After a few hostile questions from the audience, Hayden walked off the stage, confused and shaken. "Amazingly, few persons when asked could agree on who won the debate. Hayden lost on substance, but Friedman's 'go for the throat' debate tactics backfired." Warren Olney IV, Channel 4 (NBC) newscaster, moderated the Friedman vs. Hayden debate.
Other speakers included the following:
Pavel Litvinov, Soviet Union dissenter; Poul Anderson, science fiction author and winner of seven Hugo Awards and three Nebula Awards; Dr. Nathaniel Branden, author and psychotherapist; Jerome Tuccille, futurologist and author of It Usually Begins With Ayn Rand; John Hospers, USC philosophy professor and author of Libertarianism - A Political Philosophy for Tomorrow; Jack J. Matonis, tax-resistance attorney. Karl Bray, tax resister and one of the founders of the Libertarian Party (United States); Robert LeFevre, author, TV/radio broadcaster and founder of Rampart College; Hank Hohenstein, author and tax strategist; David Bergland, Libertarian vice-presidential candidate.
The California Libertarian Alliance sponsored 1977's Future of Freedom Conference.
The Future of Freedom II: The 1980's: Freedom or Slavery? (1980)
The Future of Freedom II: The 1980s: Freedom or Slavery was held at Cypress College April 19–20, 1980, with a banquet at the Buena Park Holiday Inn. Main speakers included the following:
Karl Hess, speechwriter for Senator Barry Goldwater, market anarchist, and author of Dear America; Robert Anton Wilson, author of the Illuminatus! trilogy; John Hospers, USC philosophy professor; Prof. Arthur B. Laffer, economist and originator of the "Laffer Curve"; John Matonis, tax-resistance attorney; J. Neil Schulman, science fiction writer of Alongside Night; David Bergland, attorney and Vice Presidential candidate in 1976 on the Libertarian Party ticket; Anthony Hargis, author and business entrepreneur; John Pugsley, investment advisor and author of best-seller Common Sense Economics; Linda Abrams, constitutional attorney and member of the Rampart Institute board; Prof. Bob McGinley, alternative lifestyles psychologist; Sandy Shakocius (a.k.a. Sandy Shaw), life-extensionist and biochemist; Shawn Steel, a founder of the Future of Freedom Conference Series; Carl Nicolai, electronics designer and inventor; Kenneth Grubbs, Jr., editorial editor of The Register in Orange County, and Janice Allen, Libertarian Party activist, emceed the event.
The Saturday banquet paid tribute to libertarian pacifist, author, TV/radio broadcaster, and founder of Rampart College, Robert LeFevre, who received the Future of Freedom Award. Another award, the Ludwig von Mises Merit of Honor Award, was presented to Dana Rohrabacher, one of the early organizers of the Future of Freedom Conference series.
A film festival included For a New Liberty, Libra, The Inflation File, and Theo Kamecke's The Incredible Bread Machine.
Debates pitted notable opposites, including the following:
Lowell Ponte, radio commentator and book reviewer for the Los Angeles Times, debated Jon Wiener, left-leaning history professor. George H. Smith, author, Objectivist and atheist debated Jeffrey Johnson, conservative Catholic. Samuel Konkin III, author, agorist and market anarchist debated Manny Klausner, attorney and Libertarian Party leader.
1980's The Future of Freedom II: The 1980s: Freedom or Slavery? conference was organized by Lawrence Samuels, founder of Society for Libertarian Life, president of Rampart Institute and owner of Athena Graphics, plus Jane Heider-Samuels, board member of Rampart Institute; and Howard Hinman, editor of Society for Libertarian Life newsletter Libertas Review: A Journal of Peace and Liberty. The conference was sponsored by the Society for Libertarian Life, Cypress College Libertarian Club, California Libertarian Alliance, and Society for Individual Liberty. Athena Graphics in Santa Ana provided the graphics.
The FOF Conference: The Technology of Freedom (1981)
The Future of Freedom Conference: The Technology of Freedom, held at California State University, Long Beach (CSULB student union and Soroptimist House) May 8–10, 1981, drew an estimated crowd of 500. Main speakers included the following:
Karl Hess, speechwriter for Senator Barry Goldwater and market anarchist; John Hospers, USC philosophy professor; Timothy Leary, psychologist, writer, advocate of psychedelic drugs and coauthor of The Psychedelic Experience; Robert LeFevre, author, TV/radio personality, founder of Rampart College and libertarian pacifist; Irwin Schiff, author, tax protester and author of The Biggest Con: How the Government Is Fleecing You; Dennis Brown, California Assemblyman (R-Los Alamitos); Frank E. Fortkamp, professor of educational administration; Prof. David Friedman, anarcho-capitalist, physicist, economist and author of The Machinery of Freedom; Allan E. Harrison, author and educator; Samuel Edward Konkin III, agorist, market anarchist and author of New Libertarian Manifesto; John Joseph Matonis, tax-resistance attorney; Carl Nicolai, Electronic Engineer and inventor; Lowell Ponte, radio commentator and book reviewer for Los Angeles Times; Robert W. Poole, Jr., founder of the Reason Foundation; Fred Schnaubelt, San Diego city council member; Prof. Joyce Shulman, psychotherapist; Prof. Lee M. Shulman, clinical psychologist; George H. Smith, atheist, Objectivist and author of Atheism: The Case Against God; Shawn Steel, attorney and a founder of the Future of Freedom Conference.
Dr. Demento (Barry Hansen) performed as a conference highlight. Demento is famous for his KMET-syndicated radio show from Hollywood, California, "The Dr. Demento Show." A self-described libertarian, Dr. Demento specializes in broadcasting novelty songs, comedy, and strange or unusual recordings.
At the Friday night banquet, Lawrence Samuels, who co-managed the Future of Freedom Committee, presented the Future of Freedom Award to John Hospers, USC philosophy professor and the first U.S. presidential candidate for the Libertarian Party, for his achievements in promoting liberty. Co-founders of the Society for Individual Liberty, Don Ernsberger and Dave Walter, hosted a short color slide show of the early libertarian years dating back to the 1960s. Other speakers at the banquet included the following:
Bill Susel; Robert Poole, Jr., editor-in-chief of Reason magazine; Shawn Steel, attorney; Manny Klausner, attorney and co-founder of the Reason Foundation; Leonard Liggio, president of the Institute for Humane Studies, classical liberal author and research professor.
On May 9 there was also a Society for Libertarian Life Reaffirming Liberty mini-convention in conjunction with the Future of Freedom Conference, with Robert LeFevre and Jack Matonis.
1981's The Future of Freedom Conference: The Technology of Freedom committee was co-managed by Lawrence Samuels and Kenneth Gregg; Terry Diamond was assistant manager, and Jane Heider-Samuels was treasurer. Other staff included Kim Brogan-Grubbs, Howard Hinman, Pam Maltzman, Samuel Edward Konkin III, David Stevens, Charles Curley, Don Cormier, Bruce Dovner and Tim Blaine. The conference was sponsored by Rampart Institute and the California State University Long Beach (CSULB) Students for Rational Individualism, with co-sponsors Society for Libertarian Life, Society for Individual Liberty, Libertarian Supper Club of Orange County, First Libertarian Church of Los Angeles, and the Libertarian Law Council. Lawrence Samuels' Athena Graphics in Santa Ana provided graphics.
The Future of Freedom Conference (1982)
1982's The Future of Freedom Conference was held at California State University, Long Beach (CSULB), and Long Beach Holiday Inn on October 1–3, 1982. Main speakers included the following:
Thomas Szasz, Professor of Psychiatry at the Upstate Medical Center in Syracuse and author of The Myth of Mental Illness; Doug Casey, best-selling author and economist; Robert LeFevre, author, TV/radio broadcaster and founder of Rampart College; Gary Hudson, aerospace engineer and designer of the Percheron 055, the first private space launcher in the U.S.; Jack Matonis, tax-resistance attorney; Wendy McElroy, author and individualist feminist; John Hospers, USC philosophy professor; Barbara Branden, author of The Passion of Ayn Rand; Jeff Riggenbach, journalist, author, and broadcaster; John Pugsley, author of Common Sense Economics; Dr. Nathaniel Branden, psychologist, psychotherapist, former associate of novelist Ayn Rand, and author of The Psychology of Self-Esteem; E. Devers Branden, researcher at the Biocentric Institute; Thomas Hazlett, economist and writer.
Roy Begley emceed.
There were two noteworthy debates. First, author, atheist, and Objectivist George H. Smith debated Thomas Bartman, president of the Los Angeles City Board of Education, on "Should Public Education be Abolished?" Second, Ph.D. Candidate in History at the University of Texas Jeffrey Rogers Hummel debated Prof. David Friedman, author of The Machinery of Freedom on "Should America have a Military Force for Defense?"
The Friday Night Banquet paid "Tribute to Dr. Nathaniel Branden." Presenters included David Bergland, attorney and Libertarian Party activist; Roger Callahan, psychologist; and Manny Klausner, attorney and co-founder of the Reason Foundation.
The Free Press Association (founded in 1981) presented the H.L. Mencken Awards, emceed by journalist, author, and broadcaster Jeff Riggenbach. Presenters of the awards included the following:
Dyanne Peterson, associated with the Center of Libertarian Studies; Alan Bock, editorial writer of The Register; Wendy McElroy, contributing editor of the New Libertarian and libertarian feminist; Robert LeFevre, TV/radio broadcaster and founder of Rampart College; Christine Dorffi, free-lance journalist.
Author and psychotherapist Dr. Nathaniel Branden accepted Roy Childs' Mencken Award for Best Editorial, presented by Robert LeFevre.
1982's The Future of Freedom Conference committee was co-managed by Lawrence Samuels and Terry Diamond, with Treasurer Jane Heider-Samuels, and Advertising Director Melinda M. Hanson. Other committee members included Don Cormier, Bruce Dovner, Howard Hinman, Tom Jones and Pam Maltzman.
The conference was sponsored by CSULB Students for Rational Individualism and Rampart Institute, with co-sponsors Society for Libertarian Life, Society for Individual Liberty, Libertarian Supper Club of Orange County and the Libertarian Law Council. Graphics were provided by Lawrence Samuels' Athena Graphics in Santa Ana.
The Future of Freedom Conference (1983)
1983's The Future of Freedom Conference was held at Long Beach City College and the Long Beach Holiday Inn on October 21–23, 1983. Main speakers included the following:
Barbara Branden, author of the forthcoming biography The Passion of Ayn Rand; Karl Hess, speechwriter for Senator Barry Goldwater and market anarchist; Irwin Schiff, tax resister and author of The Biggest Con: How the Government is Fleecing You; Butler D. Shaffer, Southwestern University law professor in Los Angeles; Henry Mark Holzer, constitutional lawyer and teacher at the Brooklyn Law School; Robert Poole, Jr., editor-in-chief of Reason magazine; Ben Sasway, the first draft resister jailed since the Vietnam War; George H. Smith, author, atheist, and Objectivist; Lee and Joyce Shulman, psychologists; Lowell Ponte, radio commentator and book reviewer for Los Angeles Times; Wendy McElroy, author of Freedom, Feminism and the State.
The conference was emceed by Tom Cobb and Mike Moon.
One of the best-attended events was the panel on "The Nature of Justice" by three heavyweights of the libertarian movement:
Murray Rothbard, anarcho-capitalist and Professor of Economics at Brooklyn Polytechnic; Robert LeFevre, libertarian pacifist, founder of Rampart College and author of The Nature of Man and His Government; John Hospers, USC philosophy professor and first candidate to run for President on the Libertarian Party ticket.
Friday night's Freedom Film Festival, emceed by Tom Cobb, showed Ayn Rand's The Fountainhead, plus the Oscar-winning short film Karl Hess: Toward Liberty.
The Saturday Night Banquet paid A Tribute to Murray Rothbard. Rothbard, Professor of Economics at Brooklyn Polytechnic in New York and libertarian political theorist, was presented with the Future of Freedom Award. The banquet was emceed by Wendy McElroy and included the following presenters:
George H. Smith, author, atheist, and Objectivist; Jeffery Rogers Hummel, contributing editor of Free Texas and Ph.D. Candidate in History at the University of Texas; Dr. Jack High.
Jeff Riggenbach emceed the H.L. Mencken Awards.
Presenters of individual awards included the following:
Robert Poole, Jr., a founder of the Reason Foundation; L. Susan Brown, free-lance writer and staff member of the World Research Institute (later a professor of anthropology at Florida Atlantic University); Ken Grubbs, Jr., editorial editor of The Register in Orange County.
1983's The Future of Freedom Conference committee members were Lawrence Samuels, Jane Heider-Samuels, Melinda Hanson, and Terry Diamond. Dave Stevens was Floor manager. Staff included Rose Bittick, Peggy Nytes, Rod Boyer, Dean Steenson, Irene Shannon, Michael Kember, Tim Kuklinsky, Carol Moore, L.K. O'Neal, Dan Twedt, Sandy Sisson, David Anderson, John Robertson, Karen Dominguez, and Dave Klaus. Lawrence Samuels' Athena Graphics in Santa Ana provided graphics.
The Future of Freedom Conference (1984)
Held at the California State University, Long Beach on October 19–21, 1984, the keynote speaker was attorney and senior editor to Reason magazine Manny Klausner. Other speakers included:
Sandy Shaw, life-extensionist and biochemist; Jay Snelson, founder of the Free Market Society, lecturer and educator; Tibor Machan, professor of philosophy and author of The Pseudo-Science of B.F. Skinner; Barry Reid, founder of Eden Press; Leonard Liggio, research professor of law and one of the founders of the journal Left and Right: A Journal of Libertarian Thought; Edith Efron, New York Times Magazine journalist, correspondent for Time and Life magazine and author of The News Twisters; George H. Smith, author of Atheism: The Case Against God; Tom Hazlett, professor of economics at the University of California Davis; Robert LeFevre, author, radio/TV personality and libertarian pacifist; Bernard Siegan, distinguished professor of law and author of Land Use Without Zoning; John Hospers, USC philosophy professor and editor of The Monist (1982-1992); Jack Wheeler, freelance adventurer and philosophy professor.
Friday night's Tribute to Ayn Rand banquet featured two speakers honoring the famed novelist, philosopher, playwright, and screenwriter: Barbara Branden, writer and Ayn Rand confidante, and Ruth Beebe Hill, journalist and author of Hanta Yo. Objectivist and FOF Conference Co-manager Terry Diamond emceed.
The H. L. Mencken Awards were presented by the Free Press Association, emceed by journalist, author, and broadcaster Jeff Riggenbach. The following people presented the awards:
Michael Grossberg, arts reporter, theater critic and founder of the Free Press Association; Alan Bock, Orange County Register editorial writer; John Dentinger, contributor to Playboy and Reason magazines; Christine Dorffi, Reason magazine contributor.
There was a Saturday night film festival starting with Monty Python's Life of Brian, hosted by author and singer-composer Craig Franklin, and Mike Hall, Hollywood film-maker and national Libertarian Party leader.
1984's The Future of Freedom Conference steering committee was co-managed by Lawrence Samuels and Terry Diamond, with Treasurer Jane Heider-Samuels, Charles Curley, Melinda Hanson, and Howard Hinman. Staffers included Dean Steenson, Bruce Dovner, Michael Kember, Dan Twedt, Sandy Sisson, Carol Moore, Dave Stevens, Tim Kuklinsky, Janis Hunter, Marje Spencer, and Caroline Roper-Deyo.
Rampart Institute, Society for Libertarian Life, and Cal State University, Long Beach Philosophy Association co-sponsored 1984's The Future of Freedom Conference. Lawrence Samuels' Athena Graphics in Santa Ana provided graphics.
The Future of Freedom Conference (1985)
With Fahrenheit 451 science fiction author Ray Bradbury highlighting the event, the 1985 FOF Conference was held at the Griswold Inn in Fullerton, California on October 25–27 with "300 or so faithful libertarians."
Main speakers included the following:
Karl Hess, speechwriter for Senator Barry Goldwater and market anarchist; Jeff Riggenbach, journalist, author, and broadcaster; Scott McKeown, West Coast director of the Guardian Angels, a civilian crime-fighting group; Robert Poole, Jr., one of the founders and editor-in-chief of Reason magazine; Jeffery Roger Hummel, contributing editor of Free Texas and Ph.D. Candidate in History at the University of Texas; Linda Abrams, constitutional attorney and member of the Rampart Institute board; David Ramsay Steele, former member of the Socialist Party of Great Britain and co-founder of the Libertarian Alliance in England; Wendy McElroy, author and individualist feminist; Robert LeFevre, founder of Rampart college and author of The Nature of Man and His Government; Barry Reid, founder of Eden Press; Dr. Robert Simon, Assistant Director of Emergency Medicine Residency at the University of California at Los Angeles.
Debate: One of the most talked-about events was a debate between David Ramsay Steele, a former member of the Socialist Party of Great Britain and co-founder of the Libertarian Alliance in England, and author, Objectivist, and atheist George H. Smith, on "Natural Rights: Do They Exist?" The debate was moderated by Alan Bock, editorial-page editor of the Orange County Register.
Saturday night's banquet featured the Future of Freedom Award: Tribute to Karl Hess. A former editor of Newsweek and speechwriter for Senator Barry Goldwater and Vice President Nixon, Hess authored the 1969 award-winning Playboy article, "The Death of Politics."
Presenters were Robert LeFevre, author of The Fundamentals of Liberty and Rampart College founder; John Pugsley, author of Common Sense Economics; and Alan Bock, editorial editor of the Orange County Register.
The H. L. Mencken Awards - once referred to by Robert LeFevre as the "Libertarian Academy Award Show" or the "Menckies" - were presented by the Free Press Association, co-hosted by arts reporter, theater critic and founder of the Free Press Association Michael Grossberg, and by journalist, author, and broadcaster Jeff Riggenbach. The winners were as follows:
David R. Henderson, Professor of Economics, Best News Story or Investigative Report for "The Myth of MITI"; Asa Barber, Best Feature Story or Essay for "Killing Us Softly With Their Song", published by Playboy magazine in 1984; Seymour Hersh, Best Book for The Price of Power: Kissinger in the Nixon White House, published by Summit Books; Sudha Shenoy, Best Editorial or Op-Ed Column for "Saving Wild Animals," distributed by the Institute for Humane Studies.
The film festival included the following: The Atomic Cafe; Ayn Rand's Last TV Interviews (The Phil Donahue Show, 1979, and Tom Snyder's Tomorrow, 1980); Spartacus; Harry's War; Fahrenheit 451; The Scarecrow of Romney; Moscow on the Hudson; Rock N' Roll High School; Sleeper; Duck Soup; The Fountainhead; a documentary with short TV interviews of Robert Ringer, Tibor Machan, Murray Rothbard and Ed Clark; and six episodes of the TV series The Hitchhiker's Guide to the Galaxy.

Paul Jacob from Arkansas was scheduled to speak but had to cancel at the last minute. "Instead, he was convicted last July in federal court in Little Rock, Arkansas for failure to register with the Selective Service..." and "...was sentenced to six months in prison..." With a battered cassette player held high up to the microphone, conference manager Lawrence Samuels played the voice of draft resister Paul Jacob. The L.A. Times wrote that with the "shackled, outstretched hand breaking the chain that had restrained it" (the Future of Freedom Conference logo) in the background, the "conference couldn't have asked for a more evocative image." The L.A. Times article also quoted Karl Hess's definition of libertarianism as an ideology that simply states: "Thou shalt not aggress."
1985's The Future of Freedom Conference Steering Committee was Lawrence Samuels, manager; Michael Grossberg, banquet and workshop coordinator; Ken Royal, Terry Diamond, Jane Heider-Samuels, Charles Curley, Melinda Hanson, and Howard Hinman. Danny Tvedt and Dave Meleny videotaped and audiotaped the proceedings. Staffers included Michael Kimberly, Chris Hofland, Dagney Sharon, Marc Walozk, Linda Samuels, John Robertson, Sandra Lee, Sarah Foster, Tom Thomas and Henry and Rosemary Samuels. Rampart Institute and Society for Libertarian Life co-sponsored the conference, and Lawrence Samuels' Athena Graphics in Santa Ana provided graphics.
The Future of Freedom Conference (1986)
1986's The Future of Freedom Conference was held at Pacific Hotel and Conference Center in Culver City, California, on November 7–9, 1986.
Speakers in Room 1 included the following:
Durk Pearson and Sandy Shaw, authors of Life Extension: A Practical Scientific Approach; Carol Moore, anti-war and war tax resistance activist; John Pugsley, author of best-seller Common Sense Economics and The Alpha Strategy: The Ultimate Plan of Financial Self-Defense for the Small Investor; Richard J. Maybury, author and economist; Vince Miller, founder of Libertarian International, later to become known as the International Society for Individual Liberty (ISIL); Fred Stitt, architect and editor of Guidelines newsletter; Richard B. Boddie, lawyer, adjunct professor in political science, and writer; Marshall Fritz, founder of Advocates for Self-Government and Alliance for the Separation of School and State; Alicia Clark, former national chair of the Libertarian Party; Jay Snelson, founder of the Free Market Society, lecturer and educator; Barbara Branden, a close confidant and author of The Passion of Ayn Rand; Prof. Joyce Shulman, psychotherapist; Prof. Lee M. Shulman, clinical psychologist; Kevin Cullinane, instructor for the Freedom Country seminars in South Carolina; Linda Abrams, constitutional attorney and member of the Rampart Institute board; Dr. Camille Castorina, associate professor of economics at Florida Institute of Technology; Charlotte Gerson.
The following people were members of a panel discussion on sex and freedom:
Norma Jean Almodovar, former policewoman turned prostitute and a sex workers activist; Richard B. Boddie, lawyer, adjunct professor in political science and writer; Jeffrey Rogers Hummel, contributing editor of Free Texas and Ph.D. Candidate in History at the University of Texas; Fred Stitt, architect and editor of Guidelines.

Friday night's banquet debate pitted President Reagan's senior speech writer Dana Rohrabacher against David Bergland, the 1984 Libertarian Party presidential candidate. The ensuing panel discussion on defense and foreign affairs included the following:
Kevin Cullinane, the instructor for the Freedom Country seminars in South Carolina; John Hospers, USC professor of philosophy; Robert Poole, Jr., one of the founders and editor-in-chief of Reason magazine; Jeffrey Rogers Hummel, contributing editor of Free Texas and Ph.D. Candidate in History at the University of Texas
Speakers in Room 2 included the following:
Jack Matonis, tax-resistance attorney and editor/publisher of The Newsletter for Citizens Strike; Ron Holland, financial expert, Austrian economist and author of The Threat to the Private Retirement System; Samuel E. Konkin III, agorist and market anarchist; Tonie Nathan, journalist, market consultant, and the first woman and first Jew to receive an electoral vote in a United States presidential election (1972); Tom Hazlett, professor of economics at UC Davis; John Hospers, USC philosophy professor and author of Libertarianism - A Political Philosophy for Tomorrow; Fred Stitt, architect and editor of Guidelines; Gary Hudson, aerospace engineer and designer of the Percheron 055, the first private space launcher in the U.S.; Walter Block, director of the Centre for the Study of Economics and Religion at the Fraser Institute in Canada and anarcho-libertarian theorist; Spencer H. MacCallum, social anthropologist, business consultant and author; Dennis Kamensky, Oakland Tribune columnist and author of Winning on Your Income Taxes; Mark A. Humphrey.
Panels in Room 2 included the following:
Are Religion and Libertarianism Compatible?
Alan Bock, Orange County Register editorial writer; John Yench, journalist for Freedom Newspaper, Inc.; Marshall Fritz, founder of Advocates for Self-Government and Alliance for the Separation of School and State; Butler D. Shaffer, Southwestern University law professor in Los Angeles; Robert Poole, editor-in-chief of Reason magazine and author of Cutting Back City Hall.

Another panel focused on doctors, lawyers, victims and the Justice System:
Ed Clark, Harvard Law School graduate, Libertarian Party candidate for U.S. president in 1980 and author of A New Beginning; Charlotte Gerson, an anesthesiologist on the staff at St. Luke's Hospital in San Gabriel, CA; Don Eric Franzen, a partner in a Los Angeles law firm specializing in constitutional law; Lewis Coleman.

Jury Nullification and Pro Se: Freedom or Folly
Attorney and Rampart Institute board member Dick Radford debated Bob Hallstrom, co-founder of the Barrister's Inn and sovereign citizen advocate.
Panel Presentations in Room 3 included the following:

Computers and Small Business Enterprises
Karl Hess, coordinator; Regina Liudzius, business litigation attorney; Jeff Riggenbach, journalist, author, and broadcaster; Alan Bock, Orange County Register editorial writer; John Dentinger, contributor to Playboy and Reason magazines; Jeffery Rogers Hummel, contributing editor of Free Texas and Ph.D. Candidate in History at the University of Texas; Don Ernsberger and David Walter, co-founders of Society for Individual Liberty; Shawn Steel, attorney; Bob Hallstrom, sovereign citizen advocate.

Freeing the Terran Five Billion
Mark Eric Ely-Chaitelaine, a recent graduate from the University of Science and Philosophy in Virginia; Dagny Sharon, paralegal mediator; John Yench, journalist for Freedom Newspaper, Inc.; Chuck Hammill, Mensa member and author of From Crossbows to Cryptography: Thwarting the State Via Technology; Wayne Stimson.
1986's The Future of Freedom Conference committee manager was Dagny Sharon, with assistance from Lawrence Samuels.
The Summit87 and Future of Freedom Conference (1987)
Called the Summit87 & FOFCON, the conference was held at the Pacific Hotel in Culver City, California November 13–15, 1987. Main speakers included the following:
Marshall Fritz, founder/president of Advocates for Self-Government; David Bergland, law professor, attorney and author of Libertarianism in One Lesson; Barbara Branden, a close Rand confidante and author of The Passion of Ayn Rand; Peter Breggin, psychiatrist, novelist, and author of scientific books; L. Neil Smith, author of 13 science fiction novels, including The Probability Broach; Phillip Mitchel, author and clinical psychologist.
1987's Summit87 & FOF CON committee was managed by Marshall Fritz and sponsored by Advocates for Self-Government.
The Future of Freedom and ISIL's 5th World Libertarian Conference (1990)
Sponsored by the International Society for Individual Liberty (ISIL), the conference was held in San Francisco August 10–14, 1990. The keynote speaker was the 1976 Nobel-winning economist Milton Friedman, who delivered a speech on libertarianism and humility titled Say 'No' to Intolerance, arguing that "I have no right to coerce someone else, because I cannot be sure that I'm right and he is wrong." Texas Congressman Dr. Ron Paul was another speaker.
Other speakers included the following:
Barbara Branden, a close Rand confidante and author of The Passion of Ayn Rand; Leon Louw, author and twice a Nobel Peace Prize nominee for his work to end Apartheid and defuse racial conflict in South Africa; Frances Kendall, co-author of two best-selling South African books; Richard L. Stroup, free-market environmentalist, professor of economics and director of the Office of Policy Analysis at the Department of Interior during the Reagan administration; Jane S. Shaw, journalist, environmentalist, and senior fellow of Property and Environment Research Center (PERC); Walter Block, director of the Centre for the Study of Economics and Religion at the Fraser Institute in Canada and anarcho-libertarian theorist and author of Defending the Undefendable; John Baden, co-author of Managing the Commons and founder of Foundation for Research on Economics and the Environment (FREE); Enrique Ghersi, Peruvian lawyer, professor, free market intellectual and a member of the Peruvian Parliament; Carl I. Hagen, Norwegian Member of Parliament and Progress Party leader; Petr Beckmann, scientist; Marshall Fritz, founder of Advocates for Self-Government; George H. Smith, historian and author of Atheism: The Case Against God; Dr. Peter Breggin, psychiatrist; Dr. Martin Krause, Argentine economist; Leonard Liggio, president of the Institute for Humane Studies; Robert Poole, privatization pioneer and founder of Reason Foundation; Jonathan Marshall, journalist with the San Francisco Chronicle; Robert Smith, environmental policy expert with Cato Institute; Bruce Evoy, founder of the Libertarian Party of Canada; Frank van Dun, law professor in the Netherlands; Jason Alexander, author.
1990's The Future of Freedom and ISIL's 5th World Libertarian Conference was organized by the following people:
Vince Miller, president and co-founder of ISIL; Jim Elwood, vice president of ISIL; James Peron, co-author of Liberty Reclaimed: A New Look at American Politics. Mr. Peron, the principal organizer of the event, says it was not associated with Future of Freedom other than as a sponsor, along with Advocates for Self-Government and the ISIL conference. It was billed as the World Freedom Conference. Peron says, "As principal organizer with Vince's help, I planned the event. While ISIL, FofF and Advocates were asked to help promote the event they had no actual stake in the event." There was a Future of Freedom conference at Fort Mason, San Francisco one year earlier, however, which is not mentioned above.
ISIL was formed in 1989 by the merger of the Society for Individual Liberty, founded in 1969 by Jarret Wollstein, Dave Walter and Don Ernsberger, and Libertarian International, co-founded by Vince Miller in 1980.
References
External links
(Archive)
Conferences in the United States
1969 establishments in California
Libertarianism |
44978436 | https://en.wikipedia.org/wiki/National%20Cyber%20Security%20Centre%20of%20Lithuania | National Cyber Security Centre of Lithuania | The National Cyber Security Centre of Lithuania is a government internet security organization in Vilnius, Lithuania. The Centre is part of the Ministry of National Defence. It analyses the cyber security environment in Lithuania, protects national databases, manages internet operations of national organizations, prepares cyber security plans and investigates internet attacks.
The National Cyber Security Centre of Lithuania was established due to increasing internet attacks against Lithuanian government organizations. There are around 25,000 attacks every year, and the number increases by 10–20% annually. The Centre was created in 2015 by reforming the Communications and Information Systems Service of Lithuania. The Centre issues recommendations regarding safe usage of the internet and mobile apps.
In 2018, it warned against using the Yandex.Taxi app, citing its superfluous permission requirements, its access to sensitive data, and its storage of data on servers in Russia.
See also
National Cyber Security Centre (disambiguation) in other countries
External links
References
Internet in Lithuania
Organizations based in Vilnius
Communications and media organizations based in Lithuania
Law enforcement agencies of Lithuania
2015 establishments in Lithuania
Lithuania |
650156 | https://en.wikipedia.org/wiki/Source-available%20software | Source-available software | Source-available software is software released through a source code distribution model that includes arrangements where the source can be viewed, and in some cases modified, but without necessarily meeting the criteria to be called open-source. The licenses associated with the offerings range from allowing code to be viewed for reference to allowing code to be modified and redistributed for both commercial and non-commercial purposes.
Distinction from free and open-source software
Any software is source-available software as long as its source code is distributed along with it, even if the user has no legal rights to use, share, modify or even compile it. It is possible for software to be both source-available software and proprietary software.
In contrast, the definitions of free software and open-source software are much narrower. Free and open-source software is always source-available software, but not all source-available software is free or open-source. This is because the official definitions of those terms require considerable additional rights as to what the user can do with the available source (including, typically, the right to use said software, with attribution, in derived commercial products).
Free and open-source licenses
Free software licenses and open-source software licenses are also source-available software licenses, as they both require the source code of the software to be made available.
Non-free licenses
The following source-available software licenses are considered non-free because they have limitations that prevent them from being open-source according to the Open Source Initiative or free according to the Free Software Foundation.
Commons Clause
The Commons Clause, created by Fossa, Inc., is an addendum to an open-source software license that restricts users from selling the software. Under the combined license, the software is source-available, but not open-source.
On August 22, 2018, Redis Labs shifted some Redis Modules from the Affero General Public License to a combination of the Apache License 2.0 and the Commons Clause.
In September 2018, Matthew Garrett criticized the Commons Clause, calling it an "older way of doing things" and saying it "doesn't help the commons".
GitLab Enterprise Edition License (EE License)
The GitLab Enterprise Edition License is used exclusively by GitLab's commercial offering. GitLab also releases a Community Edition under the MIT License.
GitLab Inc. openly discloses that the EE License makes their Enterprise Edition product "proprietary, closed source code." However, the company makes the source code of the Community Edition public, as well as the repository's issue tracker, and allows users to modify the source code. The dual release of the closed-source Enterprise Edition and the open-source Community Edition makes GitLab an open core company.
Mega Limited Code Review Licence
In 2016, Mega Ltd. released the source code of their Mega clients under the Mega Limited Code Review Licence, which only permits usage of the code "for the purposes of review and commentary". The source code was released after former director Kim Dotcom stated that he would "create a Mega competitor that is completely open source and non-profit" following his departure from Mega Ltd.
Microsoft Shared Source Initiative
Microsoft's Shared Source Initiative, launched in May 2001, comprises five licenses, two of which are open-source and three of which are restricted. The restricted licenses under this scheme are the Microsoft Limited Public License (Ms-LPL), the Microsoft Limited Reciprocal License (Ms-LRL), and the Microsoft Reference Source License (Ms-RSL).
Old Scilab License
Prior to version 5, Scilab described itself as "the open source platform for numerical computation" but had a license that forbade commercial redistribution of modified versions. Versions 5 and later are distributed under the GPL-compatible CeCILL license.
Server Side Public License
The Server Side Public License is a modification of the GNU General Public License version 3 created by the MongoDB project. It adds a clause stating that if SSPL-licensed software is incorporated into a "service" offered to other users, the source code for the entirety of the service must be released under the SSPL. The license has been considered non-free by Debian and Red Hat (with software licensed under it therefore banned from their Linux distributions), as well as the Open Source Initiative, as it contains conditions that are unduly discriminatory towards commercial use of the software.
SugarCRM Public License
In 2007, Michael Tiemann, president of the OSI, criticized companies such as SugarCRM for promoting their software as "open source" when in fact it did not have an OSI-approved license. In SugarCRM's case, the software was so-called "badgeware", since its license specified a "badge" that must be displayed in the user interface. SugarCRM's open-source version was re-licensed under the GPL version 3 in 2007, and later under the GNU Affero GPL version 3 in 2010.
TrueCrypt License
The TrueCrypt License was used by the TrueCrypt disk encryption utility. When TrueCrypt was discontinued, the VeraCrypt fork switched to the Apache License, but retained the TrueCrypt License for code inherited from TrueCrypt.
The Open Source Initiative rejects the TrueCrypt License, as "it has elements incompatible with the OSD." The Free Software Foundation criticizes the license for restricting who can execute the program, and for enforcing a trademark condition.
BeeGFS End User License Agreement
The BeeGFS EULA is the license of the distributed parallel file system BeeGFS, except for the Linux client, which is licensed under GPLv2.
BeeGFS source code is publicly available from their website, and on this basis the developers describe BeeGFS as "open-source" software; in fact it is not, because the license prohibits distributing modified versions of the software, or using certain features of the software without authorization.
See also
Comparison of free and open-source software licenses
Free software
Free software license
List of commercial video games with available source code
List of proprietary source-available software
List of source-available video games
Open-core model
Open-source license
Open-source software
Shared Source Initiative
References
Software licenses |
2062026 | https://en.wikipedia.org/wiki/Neonode | Neonode | Neonode Inc. is a Swedish company providing optical sensors for contactless touch, touch, gesture control, and remote sensing. Neonode technology is currently deployed in more than 80 million products and the company holds more than 100 patents worldwide. Neonode’s customer base includes Fortune 500 companies in the consumer electronics, office equipment, automotive, elevator and self-service kiosk markets. Neonode is a publicly traded company, headquartered in Stockholm, Sweden and established in 2001.
Mobile phone products
Neno
Neno was Neonode's custom graphical user interface (GUI), built on top of the Microsoft Windows CE operating system. Neonode devices ran Neno from a removable Secure Digital card.
N2
Operating system: Windows CE 6.0 Pro
Display: 176 × 220 Pixels (Width × Height), 16-bit Colour TFT
Audio: Stereo 48 kHz playback
Memory Storage: miniSD / miniSDHC
Connectivity: GSM Quad band: GSM850, GSM900, GSM1800, GSM1900 MHz, Bluetooth, GPRS
Size: 47 × 77 × 14.7 mm (W × H × T)
Weight: 70g
Battery: 820mAh
Imaging: 2 Megapixel Fixed Focus Camera, Still pictures, Video playback (MPEG, WMV), Video Recording (MPEG-4) (not available at launch, but planned for future updates)
Messaging: SMS, Support for long SMS, MMS, Predictive text input, T9 (English, Swedish, German, Dutch, Spanish, Norwegian, French and Greek), Call history (dialed, received and missed calls)
Audio: Audio player (MP3, WAV, WMA), stereo 48 kHz playback, Custom ring tones (MP3, WAV, WMA), Alarm application
Entertainment: Windows Media Player, Internet Explorer (web browser), 2 integrated games.
Third-party software.
Software development environment.
N1m
Launched: Q1 2005
Operating system: Windows CE 5.0 and 4.2
Display: 176 × 220 Pixels (Width × Height), 16-bit Colour TFT
Audio: Stereo 48 kHz playback
Memory storage: SD card up to 2 GB
Connectivity: GSM Triband: GSM900, GSM1800, GSM1900 MHz), USB 1.1
Size: 88 × 52 × 21 (H × W × D)
Weight: 94g
Imaging: Megapixel Camera (1024x1024), Imageviewer, Videoplayer (MPEG 1, MPEG 4, WMV)
Messaging: Short message service, Support for long SMS, Multimedia Messaging Service, Predictive text input, T9 (English, Swedish, German, Spanish, French and Russian), Call history (dialed, received and missed calls)
Audio: Audio player (MP3, WAV, WMA), stereo 48 kHz playback, Custom ring tones (MP3, WAV, WMA), Alarm application
Entertainment: Windows Media Player, Internet Explorer (web browser), Games, third-party software downloadable
Organiser: Calendar, Tasks, Phonebook with storage capacity of 1000 contacts, Add an individual picture for each contact, Synchronize with your PC using ActiveSync (requires extra software free of charge from Microsoft)
Miscellaneous: Large touch-screen, On-screen keyboard, Screen saver, Vibrator, Calculator, Updates available at the Neonode website, USB memory functionality, use N1 as portable storage device. A generic positive gesture (slide left to right) and negative gesture (slide right to left) were used consistently through the interface to unlock, go back to home screen, etc.
References
Mobile phone manufacturers
Telecommunications companies of Sweden
Telecommunications companies established in 2002
2002 establishments in Sweden
Technology companies disestablished in 2008
Bankrupt mobile phone companies
2008 disestablishments in Sweden |
17731312 | https://en.wikipedia.org/wiki/School%20of%20Engineering%2C%20CUSAT | School of Engineering, CUSAT | The School of Engineering is a college under Cochin University of Science and Technology, in Kochi (Cochin), Kerala, India. It was established in 1979 to offer part-time M.Tech programmes. The school was the first in the country to introduce Information Technology as an engineering stream and is one of the very few colleges in the country with a B.Tech course in Safety and Fire Engineering. The school is a research centre and a major consultancy centre. A number of research projects of national importance have been sanctioned to the school by agencies such as DRDO, ISRO, DST, AICTE, UGC, the Coir Board, and the Coconut Development Board.
B.Tech programmes offered by the school have been accredited by the National Board of Accreditation under the Tier-I system. The board has been accorded permanent signatory status to the Washington Accord on 13 June 2014. As per the Washington Accord agreement, recognition of programmes by other signatories applies only to programmes accredited by the National Board of Accreditation that are offered by education providers accepted by the board as Tier-1 institutions.
Overview
The school started offering part-time M.Tech. programmes in civil, mechanical, electrical and chemical engineering for practicing engineers in and around Cochin. It introduced B.Tech. programmes in civil engineering, mechanical engineering, electronics and communication engineering, computer science and engineering and information technology in 1995, which turned out to be a milestone in the growth of the school. B.Tech. in safety and fire engineering was added in 1996 and electrical and Electronics engineering in 2003. Having more than 2600 students, the School of Engineering is the largest academic department of the university.
Dr George Mathew is the Principal of School of Engineering.
Admission
Admission to SOE-CUSAT is based on an all-India entrance examination, the Common Admission Test, conducted by the university, with separate papers for admission to undergraduate and postgraduate courses. The exam covers questions from mathematics, physics and chemistry; the university announces the test pattern, the types of questions that may be asked, and the marking scheme in advance to give candidates a better understanding of the exam. The school has students from all over the country and abroad.
Rankings
The National Institutional Ranking Framework ranked it 178 among engineering colleges in 2020.
Courses
The school functions through its seven divisions: Civil, Mechanical, Electrical, Electronics, Computer Science, Safety and Fire, and Information Technology.
Undergraduate courses B.Tech (intake in brackets)
Civil Engineering (90)
Computer Science and Engineering (90)
Electrical and Electronics Engineering (60)
Electronics and Communication Engineering (90)
Information Technology (90)
Mechanical Engineering (90)
Safety and Fire Engineering (60)
Postgraduate course M.Tech.
Full Time M Tech Courses
Information Technology (Software Systems) (18)
Computer Science and Engineering (Networking Systems) (18)
Civil Engineering (Geo-technical Engineering) (18)
Mechanical Engineering (Thermal Engineering) (18)
Electronics & Communications Engineering (Wireless Technology) (18)
Health, Safety and Environment Management (18)
Part Time M Tech Courses
Civil Engineering (18)
Mechanical Engineering (18)
Chemical Engineering (18)
Electrical Engineering (18)
Ph.D programmes All branches mentioned above.
Student life
With all amenities provided within the campus itself, students seldom leave the campus. To ensure the effective implementation of the verdict given by the Supreme Court of India, the school has set up anti-ragging committees at four levels.
Annual Festivals
Scienza Annual Science Fest
Scienza is a science festival held in February, the science month, and is organised to ignite student interest in areas associated with technology.
The event is funded and supported by the ISSE, the PTA and faculty members of the School of Engineering.
Dhishna
Dhishna is the annual techno-management fest of the school. It includes competitions, paper presentations, exhibitions, quizzes, model displays and robotics events.
Xplendor was the annual techfest of the School of Engineering; it was revamped and renamed Dhishna in 2011. The fest encompasses a wide spectrum of activities and is managed entirely by students. Xplendor was usually organized in February. The first Xplendor was conducted in 2010, with 34 events including guest lectures, workshops, and a musical night, and attracted 30 colleges from all over India. The winners of the events shared prize money of INR 2 lakhs.
Vipanchika
Vipanchika is the arts and cultural festival of the school. It is a three-day festival encompassing various arts and cultural activities, broadly divided into on-stage and off-stage events. It is held in the winter semester.
Sargam
Sargam is the arts and cultural festival of the Cochin University of Science and Technology. It is a five-day festival comprising various arts and cultural activities, both on-stage and off-stage.
Student Organisations
The Institute of Electrical and Electronics Engineering (IEEE)
The IEEE is the world's largest technical professional society. A student branch of the IEEE was established in the university in 1986 as the 4th student branch chapter in the IEEE Kerala section, after CET, NSSCE Palakkad and TKMCE Kollam. The majority of its members are undergraduates from the School of Engineering.
The IEEE student members receive all the technical and professional benefits of IEEE membership at subsidized rates. The branch is supervised by a Branch Counsellor who is a faculty member of the university having an IEEE membership. The student branch organizes activities such as seminars, and invited speeches by technical experts.
Indian Green Building Council (IGBC) Student Chapter
The student chapter was inaugurated on 14 March 2018 by the then Hon. Vice Chancellor Dr. J Latha in the presence of the Chairman of the IGBC Kochi Chapter, Mr. B R Ajith. Students from all departments of SOE actively participate in the chapter's activities, such as seminars, talks by technical experts, and competitions.
See also
Indian Institute of Technology Palakkad
National Institute of Technology Calicut
National Institute of Technology, Tiruchirappalli
National Institute of Technology Karnataka
Kunjali Marakkar School of Marine Engineering
CUCEK
References
External links
Official website
Engineering colleges in Kochi
Cochin University of Science and Technology
1979 establishments in Kerala
Educational institutions established in 1979 |
14048720 | https://en.wikipedia.org/wiki/Pg%20%28Unix%29 | Pg (Unix) | pg is a terminal pager program on Unix and Unix-like systems for viewing text files. It can also be used to page through the output of a command via a pipe. pg uses an interface similar to vi, but commands are different.
As of 2018, pg has been removed from the POSIX specification, but is still included in util-linux. Users are expected to use other paging programs, such as more, less or most.
History
pg is the name of the historical utility on BSD UNIX systems. It was written to address a limitation of the historical more command: it could not traverse its input backward. Eventually that ability was added to more as well, so the two are now quite similar.
References
See also
less
more
most (Unix)
Terminal pagers |
41826 | https://en.wikipedia.org/wiki/Trusted%20computing%20base | Trusted computing base | The trusted computing base (TCB) of a computer system is the set of all hardware, firmware, and/or software components that are critical to its security, in the sense that bugs or vulnerabilities occurring inside the TCB might jeopardize the security properties of the entire system. By contrast, parts of a computer system outside the TCB must not be able to misbehave in a way that would leak any more privileges than are granted to them in accordance with the security policy.
The careful design and implementation of a system's trusted computing base is paramount to its overall security. Modern operating systems strive to reduce the size of the TCB so that an exhaustive examination of its code base (by means of manual or computer-assisted software audit or program verification) becomes feasible.
Definition and characterization
The term trusted computing base goes back to John Rushby, who defined it as the combination of operating system kernel and trusted processes. The latter refers to processes which are allowed to violate the system's access-control rules.
In the classic paper Authentication in Distributed Systems: Theory and Practice Lampson et al. define the TCB of a computer system as simply
a small amount of software and hardware that security depends on and that we distinguish from a much larger amount that can misbehave without affecting security.
Both definitions, while clear and convenient, are neither theoretically exact nor intended to be, as e.g. a network server process under a UNIX-like operating system might fall victim to a security breach and compromise an important part of the system's security, yet is not part of the operating system's TCB. The Orange Book, another classic computer security literature reference, therefore provides a more formal definition of the TCB of a computer system, as
the totality of protection mechanisms within it, including hardware, firmware, and software, the combination of which is responsible for enforcing a computer security policy.
In other words, the trusted computing base (TCB) is a combination of hardware, software, and controls that work together to form a trusted base that enforces a security policy.
The Orange Book further explains that
[t]he ability of a trusted computing base to enforce correctly a unified security policy depends on the correctness of the mechanisms within the trusted computing base, the protection of those mechanisms to ensure their correctness, and the correct input of parameters related to the security policy.
In other words, a given piece of hardware or software is a part of the TCB if and only if it has been designed to be a part of the mechanism that provides its security to the computer system. In operating systems, this typically consists of the kernel (or microkernel) and a select set of system utilities (for example, setuid programs and daemons in UNIX systems). In programming languages that have security features designed in such as Java and E, the TCB is formed of the language runtime and standard library.
Properties
Predicated upon the security policy
As a consequence of the above Orange Book definition, the boundaries of the TCB depend closely upon the specifics of how the security policy is fleshed out. In the network server example above, even though, say, a Web server that serves a multi-user application is not part of the operating system's TCB, it has the responsibility of performing access control so that the users cannot usurp the identity and privileges of each other. In this sense, it definitely is part of the TCB of the larger computer system that comprises the UNIX server, the user's browsers and the Web application; in other words, breaching into the Web server through e.g. a buffer overflow may not be regarded as a compromise of the operating system proper, but it certainly constitutes a damaging exploit on the Web application.
This fundamental relativity of the boundary of the TCB is exemplified by the concept of the 'target of evaluation' ('TOE') in the Common Criteria security process: in the course of a Common Criteria security evaluation, one of the first decisions that must be made is the boundary of the audit in terms of the list of system components that will come under scrutiny.
A prerequisite to security
Systems that don't have a trusted computing base as part of their design do not provide security of their own: they are only secure insofar as security is provided to them by external means (e.g. a computer sitting in a locked room without a network connection may be considered secure depending on the policy, regardless of the software it runs). This is because, as David J. Farber et al. put it, [i]n a computer system, the integrity of lower layers is typically treated as axiomatic by higher layers. As far as computer security is concerned, reasoning about the security properties of a computer system requires being able to make sound assumptions about what it can, and more importantly, cannot do; however, barring any reason to believe otherwise, a computer is able to do everything that a general Von Neumann machine can. This obviously includes operations that would be deemed contrary to all but the simplest security policies, such as divulging an email or password that should be kept secret; however, barring special provisions in the architecture of the system, there is no denying that the computer could be programmed to perform these undesirable tasks.
These special provisions that aim at preventing certain kinds of actions from being executed, in essence, constitute the trusted computing base. For this reason, the Orange Book (still a reference on the design of secure operating systems) characterizes the various security assurance levels that it defines mainly in terms of the structure and security features of the TCB.
Software parts of the TCB need to protect themselves
As outlined by the aforementioned Orange Book, software portions of the trusted computing base need to protect themselves against tampering to be of any effect. This is due to the von Neumann architecture implemented by virtually all modern computers: since machine code can be processed as just another kind of data, it can be read and overwritten by any program barring special memory management provisions that subsequently have to be treated as part of the TCB. Specifically, the trusted computing base must at least prevent its own software from being written to.
In many modern CPUs, the protection of the memory that hosts the TCB is achieved by adding in a specialized piece of hardware called the memory management unit (MMU), which is programmable by the operating system to allow and deny access to specific ranges of the system memory to the programs being run. Of course, the operating system is also able to disallow such programming to the other programs. This technique is called supervisor mode; compared to more crude approaches (such as storing the TCB in ROM, or equivalently, using the Harvard architecture), it has the advantage of allowing the security-critical software to be upgraded in the field, although allowing secure upgrades of the trusted computing base poses bootstrap problems of its own.
Trusted vs. trustworthy
As stated above, trust in the trusted computing base is required to make any progress in ascertaining the security of the computer system. In other words, the trusted computing base is “trusted” first and foremost in the sense that it has to be trusted, and not necessarily that it is trustworthy. Real-world operating systems routinely have security-critical bugs discovered in them, which attests to the practical limits of such trust.
The alternative is formal software verification, which uses mathematical proof techniques to show the absence of bugs. Researchers at NICTA and its spinout Open Kernel Labs have recently performed such a formal verification of seL4, a member of the L4 microkernel family, proving functional correctness of the C implementation of the kernel.
This makes seL4 the first operating-system kernel which closes the gap between trust and trustworthiness, assuming the mathematical proof is free from error.
TCB size
Due to the aforementioned need to apply costly techniques such as formal verification or manual review, the size of the TCB has immediate consequences on the economics of the TCB assurance process, and the trustworthiness of the resulting product (in terms of the mathematical expectation of the number of bugs not found during the verification or review). In order to reduce costs and security risks, the TCB should therefore be kept as small as possible. This is a key argument in the debate preferring microkernels to monolithic kernels.
Examples
AIX materializes the trusted computing base as an optional component in its install-time package management system.
See also
Black box
Orange Book
Trust anchor
Hardware security
References
Computer security procedures |
31164227 | https://en.wikipedia.org/wiki/2010%E2%80%9311%20Arkansas%E2%80%93Little%20Rock%20Trojans%20men%27s%20basketball%20team | 2010–11 Arkansas–Little Rock Trojans men's basketball team | The 2010–11 Arkansas–Little Rock Trojans men's basketball team represented the University of Arkansas at Little Rock during the 2010–11 NCAA Division I men's basketball season. The Trojans, led by 8th year head coach Steve Shields, played their home games at the Jack Stephens Center and are members of the Sun Belt Conference. They finished the season with a record of 19–17, 7–9 in Sun Belt play. They won the 2011 Sun Belt Men's Basketball Tournament to earn an automatic bid in the 2011 NCAA Men's Division I Basketball Tournament. They lost in the new First Four round to UNC Asheville in overtime.
Roster
Schedule
Sun Belt Conference Tournament
NCAA Tournament
References
Arkansas-Little Rock
Arkansas-Little Rock Trojans men's basketball team
Little Rock Trojans men's basketball seasons |
919313 | https://en.wikipedia.org/wiki/S%20%28programming%20language%29 | S (programming language) | S is a statistical programming language developed primarily by John Chambers and (in earlier versions) Rick Becker and Allan Wilks of Bell Laboratories. The aim of the language, as expressed by John Chambers, is "to turn ideas into software, quickly and faithfully".
The modern implementation of S is R, a part of the GNU free software project. S-PLUS, a commercial product, was formerly sold by TIBCO Software.
History
"Old S"
S is one of several statistical computing languages that were designed at Bell Laboratories, and first took form between 1975 and 1976. Up to that time, much of the statistical computing was done by directly calling Fortran subroutines; however, S was designed to offer an alternate and more interactive approach. Early design decisions that hold even today include interactive graphics devices (printers and character terminals at the time), and providing easily accessible documentation for the functions.
The first working version of S was built in 1976, and operated on the GCOS operating system. At this time, S was unnamed, and suggestions included ISCS (Interactive SCS), SCS (Statistical Computing System), and SAS (Statistical Analysis System) (which was already taken: see SAS System). The name 'S' (used with single quotation marks until 1979) was chosen, as it was a common letter in the suggestions and consistent with other programming languages designed from the same institution at the time (namely the C programming language).
When UNIX/32V was ported to the (then new) 32-bit DEC VAX, computing on the Unix platform became feasible for S. In late 1979, S was ported from GCOS to UNIX, which would become the new primary platform.
In 1980 the first version of S was distributed outside Bell Laboratories and in 1981 source versions were made available. In 1984 two books were published by the research team at Bell Laboratories: S: An Interactive Environment for Data Analysis and Graphics (1984 Brown Book) and Extending the S System. Also, in 1984 the source code for S became licensed through AT&T Software Sales for education and commercial purposes.
"New S"
By 1988, many changes were made to S and the syntax of the language. The New S Language (1988 Blue Book) was published to introduce the new features, such as the transition from macros to functions and how functions can be passed to other functions (such as apply). Many other changes to the S language were to extend the concept of "objects", and to make the syntax more consistent (and strict). However, many users found the transition to New S difficult, since their macros needed to be rewritten. Many other changes to S took hold, such as the use of X11 and PostScript graphics devices, rewriting many internal functions from Fortran to C, and the use of double precision (only) arithmetic. The New S language is very similar to that used in modern versions of S-PLUS and R.
In 1991, Statistical Models in S (1991 White Book) was published, which introduced the use of formula-notation (which use the ~ operator), data frame objects, and modifications to the use of object methods and classes.
S4
The latest version of the S standard is S4, released in 1998. It provides advanced object-oriented features. S4 classes differ markedly from S3 classes; S4 formally defines the representation and inheritance for each class, and has multiple dispatch: the generic function can be dispatched to a method based on the class of any number of arguments, not just one.
References
External links
Evolution of the S Language, by John M. Chambers, discusses the new features in Version 4 of S (in PostScript format)
Statistical programming languages
Programming languages created in 1976 |
3079433 | https://en.wikipedia.org/wiki/RUNT%20Linux | RUNT Linux | RUNT Linux is an acronym for ResNet USB Network Tester. It is one of many Linux distributions designed to run from a USB flash drive. RUNT is based on Slackware's bare kernel. It was originally designed as a network tool for students at North Carolina State University. It consists of a boot floppy image and a zip file, similar to zipslack. It is intended to be a fairly complete Linux installation for use as a testing tool capable of booting on any x86 computer with a USB port and a bootable floppy.
External links
Runt Linux Homepage of the distribution
Linux on a Stick! Screenshot walkthrough showing how to put Runt on a pendrive
RUNT Linux at DistroWatch
Live USB
Linux distributions |
100616 | https://en.wikipedia.org/wiki/Parallel%20port | Parallel port | In computing, a parallel port is a type of interface found on early computers (personal and otherwise) for connecting peripherals. The name refers to the way the data is sent; parallel ports send multiple bits of data at once (parallel communication), as opposed to serial communication, in which bits are sent one at a time. To do this, parallel ports require multiple data lines in their cables and port connectors and tend to be larger than contemporary serial ports, which only require one data line.
There are many types of parallel ports, but the term has become most closely associated with the printer port or Centronics port found on most personal computers from the 1970s through the 2000s. It was an industry de facto standard for many years, and was finally standardized as IEEE 1284 in the late 1990s, which defined the Enhanced Parallel Port (EPP) and Extended Capability Port (ECP) bi-directional versions. Today, the parallel port interface is virtually non-existent because of the rise of Universal Serial Bus (USB) devices, along with network printing using Ethernet and Wi-Fi connected printers.
The parallel port interface was originally known as the Parallel Printer Adapter on IBM PC-compatible computers. It was primarily designed to operate printers that used IBM's eight-bit extended ASCII character set to print text, but could also be used to adapt other peripherals. Graphical printers, along with a host of other devices, have been designed to communicate with the system.
History
Centronics
An Wang, Robert Howard and Prentice Robinson began development of a low-cost printer at Centronics, a subsidiary of Wang Laboratories that produced specialty computer terminals. The printer used the dot matrix printing principle, with a print head consisting of a vertical row of seven metal pins connected to solenoids. When power was applied to the solenoids, the pin was pushed forward to strike the paper and leave a dot. To make a complete character glyph, the print head would receive power to specified pins to create a single vertical pattern, then the print head would move to the right by a small amount, and the process repeated. On their original design, a typical glyph was printed as a matrix seven high and five wide, while the "A" models used a print head with 9 pins and formed glyphs that were 9 by 7.
This left the problem of sending the ASCII data to the printer. While a serial port does so with the minimum of pins and wires, it requires the device to buffer up the data as it arrives bit by bit and turn it back into multi-bit values. A parallel port makes this simpler; the entire ASCII value is presented on the pins in complete form. In addition to the eight data pins, the system also needed various control pins as well as electrical grounds. Wang happened to have a surplus stock of 20,000 Amphenol 36-pin micro ribbon connectors that were originally used for one of their early calculators. The interface only required 21 of these pins; the rest were grounded or not connected. The connector has become so closely associated with Centronics that it is now popularly known as the "Centronics connector".
The Centronics Model 101 printer, featuring this connector, was released in 1970. The host sent ASCII characters to the printer using seven of eight data pins, pulling them high to +5V to represent a 1. When the data was ready, the host pulled the STROBE pin low, to 0 V. The printer responded by pulling the BUSY line high, printing the character, and then returning BUSY to low again. The host could then send another character. Control characters in the data caused other actions, like the CR or EOF. The host could also have the printer automatically start a new line by pulling the AUTOFEED line high, and keeping it there. The host had to carefully watch the BUSY line to ensure it did not feed data to the printer too rapidly, especially given variable-time operations like a paper feed.
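The strobe/busy handshake just described can be modeled in software. The following Python sketch is purely illustrative; the `Printer` class and `send` function are hypothetical stand-ins for the two ends of the cable, not part of any real driver:

```python
# Illustrative software model of the Centronics strobe/busy handshake.
# The printer latches a byte when the host strobes, raises BUSY while
# printing, and drops BUSY when ready for the next character.

class Printer:
    def __init__(self):
        self.busy = False
        self.received = []

    def strobe(self, data):
        # Host pulled STROBE low with data valid: latch the byte.
        self.busy = True            # printer raises BUSY while printing
        self.received.append(data)
        self.busy = False           # BUSY returns low: ready for more

def send(printer, text):
    for ch in text.encode("ascii"):
        while printer.busy:         # host must watch BUSY before strobing
            pass
        printer.strobe(ch)

p = Printer()
send(p, "HELLO")
print(bytes(p.received))  # → b'HELLO'
```

In the real interface the "wait for BUSY" step runs at hardware speed, which is why the host had to watch the line carefully during slow, variable-time operations such as a paper feed.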
The printer side of the interface quickly became an industry de facto standard, but manufacturers used various connectors on the system side, so a variety of cables were required. For example, NCR used the 36-pin micro ribbon connector on both ends of the connection, early VAX systems used a DC-37 connector, Texas Instruments used a 25-pin card edge connector and Data General used a 50-pin micro ribbon connector. When IBM implemented the parallel interface on the IBM PC, they used the DB-25F connector at the PC-end of the interface, creating the now familiar parallel cable with a DB25M at one end and a 36-pin micro ribbon connector at the other.
In theory, the Centronics port could transfer data as rapidly as 75,000 characters per second. This was far faster than the printer, which averaged about 160 characters per second, meaning the port spent much of its time idle. The performance was defined by how rapidly the host could respond to the printer's BUSY signal asking for more data. To improve performance, printers began incorporating buffers so the host could send them data more rapidly, in bursts. This not only reduced (or eliminated) delays due to latency waiting for the next character to arrive from the host, but also freed the host to perform other operations without causing a loss of performance. Performance was further improved by using the buffer to store several lines and then printing in both directions, eliminating the delay while the print head returned to the left side of the page. Such changes more than doubled the performance of an otherwise unchanged printer, as was the case on Centronics models like the 102 and 308.
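The figures quoted above imply a very low duty cycle for the port itself, which a quick back-of-the-envelope calculation makes concrete:

```python
# Rough utilization of a Centronics port driving a printer of the era,
# using the throughput figures quoted in the text.
port_cps = 75_000     # theoretical maximum transfer rate (chars/sec)
printer_cps = 160     # average printing speed (chars/sec)

utilization = printer_cps / port_cps
print(f"{utilization:.2%}")   # → 0.21% (the port is idle over 99% of the time)
```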
IBM
IBM released the IBM Personal Computer in 1981 and included a variant of the Centronics interface; only IBM-logo printers (rebranded from Epson) could be used with the IBM PC. IBM standardized the parallel cable with a DB25F connector on the PC side and the 36-pin Centronics connector on the printer side. Vendors soon released printers compatible with both standard Centronics and the IBM implementation.
The original IBM parallel printer adapter for the IBM PC of 1981 was designed to support limited bidirectionality, with 8 lines of data output and 4 lines of data input. This allowed the port to be used for other purposes, not just output to a printer. This was accomplished by allowing the data lines to be written to by devices on either end of the cable, which required the ports on the host to be bidirectional. This feature saw little use, and was removed in later revisions of the hardware. Years later, in 1987, IBM reintroduced the bidirectional interface with its IBM PS/2 series, where it could be enabled or disabled for compatibility with applications hardwired not to expect a printer port to be bidirectional.
Bi-Tronics
As the printer market expanded, new types of printing mechanisms appeared. These often supported new features and error conditions that could not be represented on the existing port's relatively few status pins. While the IBM solution could support this, it was not trivial to implement and was not at that time being supported. This led to the Bi-Tronics system, introduced by HP on their LaserJet 4Si in April 1993. This used four existing status pins, ERROR, SELECT, PE and BUSY, to represent a nibble, using two transfers to send an 8-bit value. Bi-Tronics mode, now known as nibble mode, was indicated by the host pulling the SELECT line high, and data was transferred when the host toggled AUTOFEED low. Other changes in the handshaking protocols improved performance, reaching 400,000 cps to the printer, and about 50,000 cps back to the host. A major advantage of the Bi-Tronics system is that it can be driven entirely in software in the host and uses otherwise unmodified hardware; all the pins used for data transfer back to the host were already printer-to-host lines.
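Since each nibble-mode transfer carries only four bits on the status lines, the host reassembles every byte from two reads. A minimal sketch of that reassembly (the helper name and nibble ordering here are illustrative assumptions, not the exact IEEE 1284 line assignment):

```python
def byte_from_nibbles(low, high):
    """Combine two 4-bit status-line transfers into one byte (low nibble first)."""
    assert 0 <= low < 16 and 0 <= high < 16
    return (high << 4) | low

# Two transfers of 0x8 and 0x4 reassemble to 0x48, ASCII 'H'.
assert byte_from_nibbles(0x8, 0x4) == 0x48
assert chr(byte_from_nibbles(0x8, 0x4)) == "H"
```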
EPP and ECP
The introduction of new devices like scanners and multi-function printers demanded much more performance than either the Bi-Tronics or IBM style backchannels could handle. Two other standards have become more popular for these purposes. The Enhanced Parallel Port (EPP), originally defined by Zenith Electronics, is similar to IBM's byte mode in concept, but changes details of the handshaking to allow up to 2 MB/s. The Extended Capability Port (ECP) is essentially an entirely new port in the same physical housing that also adds direct memory access based on ISA and run-length encoding to compress the data, which is especially useful when transferring simple images like faxes or black-and-white scanned images. ECP offers performance up to 2.5 MB/s in both directions.
All of these enhancements are collected as part of the IEEE 1284 standard. The first release in 1994 included original Centronics mode ("compatibility mode"), nibble and byte modes, as well as a change to the handshaking that was already widely used; the original Centronics implementation called for the BUSY lead to toggle with each change on any line of data (busy-by-line), whereas IEEE 1284 calls for BUSY to toggle with each received character (busy-by-character). This reduces the number of BUSY toggles and the resulting interruptions on both sides. A 1997 update standardized the printer status codes. In 2000, the EPP and ECP modes were moved into the standard, as well as several connector and cable styles, and a method for daisy chaining up to eight devices from a single port.
Some host systems or print servers may use a strobe signal with a relatively low voltage output or a fast toggle. Any of these issues might cause no or intermittent printing, missing or repeated characters or garbage printing. Some printer models may have a switch or setting to set busy by character; others may require a handshake adapter.
Dataproducts
Dataproducts introduced a very different implementation of the parallel interface for their printers. It used a DC-37 connector on the host side and a 50-pin connector on the printer side: either a DD-50 (sometimes incorrectly referred to as a "DB50") or the block-shaped M-50 connector; the M-50 was also referred to as Winchester. Dataproducts parallel was available in a short-line version for shorter cable runs and a long-line version using differential signaling for longer connections. The Dataproducts interface was found on many mainframe systems up through the 1990s, and many printer manufacturers offered the Dataproducts interface as an option.
A wide variety of devices were eventually designed to operate on a parallel port. Most devices were uni-directional (one-way) devices, only meant to respond to information sent from the PC. However, some devices such as Zip drives were able to operate in bi-directional mode. Printers also eventually took up the bi-directional system, allowing various status report information to be sent.
Historical uses
Before the advent of USB, the parallel interface was adapted to access a number of peripheral devices other than printers. One early use of the parallel port was for dongles used as hardware keys which were supplied with application software as a form of software copy protection. Other uses included optical disc drives such as CD readers and writers, Zip drives, scanners, external modems, gamepads, and joysticks. Some of the earliest portable MP3 players required a parallel port connection for transferring songs to the device. Adapters were available to run SCSI devices via parallel. Other devices such as EPROM programmers and hardware controllers could be connected via the parallel port.
Interfaces
Most PC-compatible systems in the 1980s and 1990s had one to three ports, with communication interfaces defined like this:
Logical parallel port 1: I/O port 0x3BC, IRQ 7 (usually in monochrome graphics adapters)
Logical parallel port 2: I/O port 0x378, IRQ 7 (dedicated IO cards or using a controller built into the mainboard)
Logical parallel port 3: I/O port 0x278, IRQ 5 (dedicated IO cards or using a controller built into the mainboard)
If no printer port is present at 0x3BC, the second port in the row (0x378) becomes logical parallel port 1 and 0x278 becomes logical parallel port 2 for the BIOS. Sometimes, printer ports are jumpered to share an interrupt despite having their own IO addresses (i.e. only one can be used interrupt-driven at a time). In some cases, the BIOS supports a fourth printer port as well, but the base address for it differs significantly between vendors. Since the reserved entry for a fourth logical printer port in the BIOS Data Area (BDA) is shared with other uses on PS/2 machines and with S3 compatible graphics cards, it typically requires special drivers in most environments.
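The logical-port assignment described above amounts to the BIOS probing the three base addresses in a fixed order and numbering whichever respond. In this rough Python model, the hardware probe is abstracted as a set of detected addresses (an illustrative simplification):

```python
# BIOS probe order for parallel port base addresses.
PROBE_ORDER = [0x3BC, 0x378, 0x278]

def assign_logical_ports(detected):
    """Map detected base addresses to logical parallel ports 1..n,
    in BIOS probe order."""
    return {i + 1: addr
            for i, addr in enumerate(a for a in PROBE_ORDER if a in detected)}

# All three ports present:
assert assign_logical_ports({0x3BC, 0x378, 0x278}) == {1: 0x3BC, 2: 0x378, 3: 0x278}
# No port at 0x3BC: 0x378 becomes logical port 1 and 0x278 logical port 2.
assert assign_logical_ports({0x378, 0x278}) == {1: 0x378, 2: 0x278}
```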
Under DR-DOS 7.02 the BIOS port assignments can be changed and overridden using the LPT1, LPT2, LPT3 (and optionally LPT4) CONFIG.SYS directives.
Access
DOS-based systems make the logical parallel ports detected by the BIOS available under device names such as LPT1, LPT2 or LPT3 (corresponding with logical parallel port 1, 2, and 3, respectively). These names derive from terms like Line Print Terminal, Local Print Terminal, or Line Printer. A similar naming convention was used on ITS, DEC systems, as well as in CP/M and 86-DOS (LST).
In DOS, the parallel printers could be accessed directly on the command line. For example, the command "TYPE C:\AUTOEXEC.BAT > LPT1:" would redirect the contents of the AUTOEXEC.BAT file to the printer port. A PRN device was also available as an alias for LPT1. Some operating systems (like Multiuser DOS) allow this fixed assignment to be changed by different means. Some DOS versions use resident driver extensions provided by MODE, or users can change the mapping internally via a CONFIG.SYS PRN=n directive (as under DR-DOS 7.02 and higher). DR-DOS 7.02 also provides optional built-in support for LPT4 if the underlying BIOS supports it.
PRN, along with CON, AUX and a few others, are invalid file and directory names in DOS and Windows, even in Windows XP. There is also an MS-DOS device-in-path-name vulnerability in Windows 95 and 98, which causes the computer to crash if the user types "C:\CON\CON", "C:\PRN\PRN" or "C:\AUX\AUX" in the Windows Explorer address bar. Microsoft has released a patch to fix this bug, but newly installed Windows 95 and 98 operating systems will still have the bug.
A special "PRINT" command also existed to achieve the same effect. Microsoft Windows still refers to the ports in this manner in many cases, though this is often fairly hidden.
In SCO UNIX and Linux, the first parallel port is available via the filesystem as /dev/lp0. Linux IDE devices can use a paride (parallel port IDE) driver.
Notable consumer products
The Iomega ZIP drive
The Snappy Video SnapShot video capture device
MS-DOS 6.22's INTERLNK and INTERSRV drive sharing utility
The Covox Speech Thing audio device
The OPL2LPT and OPL3LPT audio devices
Current use
For consumers, USB and computer networks have replaced the parallel printer port, for connections both to printers and to other devices.
Many manufacturers of personal computers and laptops consider parallel to be a legacy port and no longer include the parallel interface. Smaller machines have less room for large parallel port connectors. USB-to-parallel adapters are available that can make parallel-only printers work with USB-only systems.
There are PCI (and PCI-express) cards that provide parallel ports. There are also some print servers that provide an interface to parallel ports through a network. USB-to-EPP chips can also allow other non-printer devices to continue to work on modern computers without a parallel port.
For electronics hobbyists the parallel port is still often the easiest way to connect to an external circuit board. It is faster than the other common legacy port (serial port), requires no serial-to-parallel converter, and requires far less interface logic and software than a USB target interface. However, Microsoft operating systems later than Windows 95/98 prevent user programs from directly writing to or reading from the LPT without additional software (kernel extensions).
CNC milling machines also often make use of the parallel port to directly control the machine's motors and attachments.
IBM PC implementation
Port addresses
Traditionally, IBM PC systems have allocated their first three parallel ports to the base addresses 0x3BC, 0x378 and 0x278 (if all three printer ports exist).
If there is an unused slot, the port addresses of the others are moved up. (For example, if a port at 0x3BC does not exist, the port at 0x378 will then become the first logical parallel port.) The base address 0x3BC is typically supported by printer ports on MDA and Hercules display adapters, whereas printer ports provided by the mainboard chipset or add-on cards can rarely be configured to this base address. Therefore, in the absence of a monochrome display adapter, a common assignment for the first logical parallel port (and therefore also for the corresponding LPT1 DOS device driver) today is 0x378, even though the default is still 0x3BC (and would be selected by the BIOS if it detects a printer port at this address). The IRQ lines are typically configurable in the hardware as well. Assigning the same interrupt to more than one printer port should be avoided and will typically cause one of the corresponding ports to work in polled mode only. The port addresses currently assigned can be determined by reading the BIOS Data Area (BDA) at 0000h:0408h.
Bit-to-pin mapping for the Standard Parallel Port (SPP):
~ indicates a hardware inversion of the bit.
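As an illustration of these inversions: in the conventional SPP register layout (the widely documented convention is assumed here, since the mapping table itself is not reproduced above), the STROBE, AUTOFEED and SELECT-IN bits of the control register and the BUSY bit of the status register are inverted in hardware, so software must flip those bits to recover the logic level actually present on the pin:

```python
# Conventional SPP bit assignments (assumed, standard convention):
# control register: bit0=STROBE~, bit1=AUTOFEED~, bit2=INIT, bit3=SELECTIN~
# status register:  bit3=ERROR, bit4=SELECT, bit5=PE, bit6=ACK, bit7=BUSY~
CONTROL_INVERT_MASK = 0b0000_1011   # STROBE, AUTOFEED, SELECTIN are inverted
STATUS_INVERT_MASK  = 0b1000_0000   # BUSY is inverted

def pins_from_control(reg):
    """Control register value -> actual logic levels on the four output pins."""
    return (reg ^ CONTROL_INVERT_MASK) & 0x0F

def busy_pin_high(status_reg):
    """True if the BUSY pin (pin 11) is at a high logic level."""
    return not (status_reg & STATUS_INVERT_MASK)

# Writing 0 to the control register drives STROBE, AUTOFEED and SELECT-IN high:
assert pins_from_control(0x00) == 0b1011
# A status read with bit 7 set means the BUSY pin is actually low:
assert busy_pin_high(0x80) is False
```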
Program interface
In versions of Windows that did not use the Windows NT kernel (as well as DOS and some other operating systems), programs could access the parallel port with simple outportb() and inportb() subroutine commands. In operating systems such as Windows NT and Unix (NetBSD, FreeBSD, Solaris, 386BSD, etc.), the microprocessor is operated in a different security ring, and access to the parallel port is prohibited, unless using the required driver. This improves security and arbitration of device contention. On Linux, inb() and outb() can be used when a process is run as root and an ioperm() command is used to allow access to its base address; alternatively, ppdev allows shared access and can be used from userspace if the appropriate permissions are set.
The cross-platform library for parallel port access, libieee1284, also is available on many Linux distributions and provides an abstract interface to the parallel ports of the system. Access is handled in an open-claim-release-close sequence, which allows for concurrent access in userspace.
Pinouts
The older parallel printer ports had an 8-bit data bus and four pins for control output (Strobe, Linefeed, Initialize, and Select In), and five more for control input (ACK, Busy, Select, Error, and Paper Out). Their data transfer speed was up to 150 kB/s.
The newer EPPs (Enhanced Parallel Ports) have an 8-bit data bus, and the same control pins as the normal parallel printer port. Newer ports reach speeds of up to 2 MB/s.
Pinouts for parallel port connectors are:
Inverted lines are true on logic low. If they are not inverted, then logic high is true.
Pin 25 on the DB25 connector might not be connected to ground on modern computers.
See also
Device file
Serial port
Parallel communication
Input/Output Base Address
IEEE 1284 which is sometimes called an "Enhanced Parallel Port"
Biostar, a Taiwanese computer component manufacturer partly known for having parallel port connectivity on their motherboards
Hardware IC chips:
For host computer, see Super I/O
For peripheral side, parallel port interface chips: PPC34C60 (SMSC) and W91284PIC (Warp Nine)
For USB-printer purpose, example USB chips: PL-2305 (Prolific) and CH341 (QinHeng)
References
Axelson, Jan (2000). Parallel Port Complete. Jan Axelson's Lakeview Research.
The (Linux) Parallel Port Subsystem by Tim Waugh
External links
Parallel Port (from BeyondLogic.org) standard, enhanced (EPP), extended (ECP), examples
EPP parallel printer port data capture project
Linux I/O port programming mini-HOWTO
The Linux 2.4 Parallel Port Subsystem
Parallel Port interfacing with Windows NT/2000/XP
Parallel port complete: programming, interfacing & using the PC's parallel printer port
PyParallel - API for Python programming language
Linux ppdev reference
libieee1284 homepage
MSDN: Roadmap for Developing Parallel Device Drivers
Computer buses
Physical layer protocols
Legacy hardware
Computer connectors |
31546068 | https://en.wikipedia.org/wiki/List%20of%20RAM%20drive%20software | List of RAM drive software | RAM drive software allows part of a computer's RAM (memory) to be seen as if it were a disk drive, with volume name and, if supported by the operating system, drive letter. A RAM drive has much faster read and write access than a hard drive with rotating platters, and is volatile, being destroyed with its contents when a computer is shut down or crashes—volatility is an advantage if security requires sensitive data to not be stored permanently, and to prevent accumulation of obsolete temporary data, but disadvantageous where a drive is used for faster processing of needed data. Data can be copied between conventional mass storage and a RAM drive to preserve it on power-down and load it on start-up.
Overview
Features
Features that vary from one package to another:
Some RAM drives automatically back up contents on normal mass storage on power-down, and load them when the computer is started. If this functionality is not provided, contents can always be preserved by start-up and close-down scripts, or manually if the operator remembers to do so.
Some software allows several RAM drives to be created; other programs support only one.
Some RAM drives when used with 32-bit operating systems (particularly 32-bit Microsoft Windows) on computers with IBM PC architecture allow memory above the 4 GB point in the memory map, if present, to be used; this memory is unmanaged and not normally accessible. Software using unmanaged memory can cause stability problems.
Specifically, on IBM PC-based 32-bit operating systems, some RAM drives are able to use 'unmanaged' or 'invisible' RAM below 4 GB in the memory map (see the 3 GB barrier), i.e. RAM in the 'PCI hole'. Note that a RAM drive supporting AWE (Address Windowing Extensions) memory above 4 GB does not necessarily also support unmanaged PAE (Physical Address Extension) memory below 4 GB; most do not.
FreeBSD
md – memory disk
This driver provides support for four kinds of memory backed virtual disks: malloc, preload, vnode, swap. Disks may be created with the next command line tools: mdconfig and mdmfs. An example of how to use these programs follows.
To create and mount memory disk with mdmfs:
# mdmfs -F newimage -s 5m md0 /mnt
To create and mount memory disk with mdconfig:
# mdconfig -a -t swap -s 5m -u 0
# newfs -U md0
# mount /dev/md0 /mnt
To destroy previously created disk:
# umount /mnt
# mdconfig -d -u 0
Linux
shm
Modern Linux systems come with a user-accessible RAM disk (a tmpfs) mounted at /dev/shm by default.
RapidDisk
RapidDisk is a free and open source project containing a Linux kernel module and administration utility that functions similarly to Ramdiskadm on the Solaris operating system. With the rxadm utility, the user can dynamically attach, remove, and resize RAM disk volumes and treat them like any other block device.
RAMDisk
Free and open-source utility that allows using RAM as a folder.
tmpfs and ramfs
An example of how to use tmpfs and ramfs in a Linux environment is as follows:
$ mkdir /var/ramdisk
Once the mount point is identified the mount command can be used to mount a tmpfs and ramfs file system on top of that mount point:
$ mount -t tmpfs none /var/ramdisk -o size=28m
Now each time /var/ramdisk is accessed all reads and writes will be coming directly from memory.
There are two differences between tmpfs and ramfs:
1) The mounted space of a ramfs is theoretically infinite: a ramfs grows as needed, which can easily cause a system lockup or crash by using up all available memory, or trigger heavy swapping to free up more memory for the ramfs. For this reason, limiting the size of a ramfs area is recommended.
2) tmpfs is backed by the computer's swap space.
There are also many "wrappers" for the RAM disks for Linux as Profile-sync-daemon (psd) and many others allowing users to utilize RAM disk for desktop application speedup moving intensive IO for caches into RAM.
Microsoft Windows
Non-proprietary
ImDisk
ImDisk Virtual Disk Driver is a disk image emulator created by Olof Lagerkvist. It is free and open-source software, and is available in 32- and 64-bit variants. It is digitally signed, which makes it compatible with 64-bit versions of Microsoft Windows without having to be run in Test mode. The 64-bit version has no practical limit to the size of RAM disk that may be created.
ImDisk Toolkit is a third-party, free and open-source software that embeds the ImDisk Virtual Disk Driver and adds several features.
ERAM
ERAM is an open source driver that supports making a drive of up to 4 GB out of the total amount of RAM, uses paged/non-paged memory, and supports backing up the drive to an image. It works on Windows XP/NT/2000/7/10 (32- and 64-bit). Its driver and source code are available at https://github.com/Zero3K/ERAM.
Proprietary
AMD Radeon RAMDisk
AMD Radeon RAMDisk is available in free versions (RAM drive up to 4 GB, or 6 GB with AMD memory), and commercial versions for drives up to 64 GB. The free version is 'advertising supported'. Creates only a single drive (does not support multiple RAM drives). Can be backed up periodically to hard drive, and automatically loaded when the computer is started. AMD Radeon RAMDisk is a rebranded version of Dataram RAMDisk.
Dataram RAMDisk
Dataram's RAMDisk is freeware (up to 1 GB disk size, reduced from 4 GB as of October 2015) and was originally developed and marketed by John Lajoie through his private consulting company until 2001, when he sold his rights to Cenatek, before being acquired by Dataram. RAM disks larger than 4 GB require registration and a US$18.99 single-user license. A RAMDisk license was formerly provided free of charge when purchasing physical RAM from Dataram, though as of April 2014 this is no longer the case. Compatible with all 32-bit and 64-bit versions of Windows 10, Windows 8, Windows 7, Windows Vista, Windows XP, Windows Server 2008, and Windows Server 2003.
Dimmdrive RAMDisk
A RAM disk built specifically for gamers, featuring real-time file synchronization, Steam integration, and a "USB3 Turbo Mode". The interface was designed to support both technical and non-technical game enthusiasts. It costs $29 at Dimmdrive.com and $30 on Steam ($14.99 on Steam as of 2018).
Gavotte RamDisk
Can use Physical Address Extension to create a virtual disk in memory normally inaccessible to 32-bit versions of Microsoft Windows (both memory above the 4 GB point, and memory in the PCI hole). There is also an open source plugin that replaces the RAM drive on Bart's PE Builder with one based on Gavotte's rramdisk.sys.
Gilisoft RAMDisk
RAMDisk software for Windows 2000/2003/XP/Vista/Windows 7 (x32 and x64)/Windows 10 with simple setup. It permits mounting and unmounting of RAMDisk images to and from drive image files, along with convenient automated startup and shutdown features; $25.
Gizmo Central
Gizmo Central is a freeware program that can create and mount virtual disk files. It also has the ability to create a RAM disk up to 4 GB in size, as Gizmo is a 32-bit program.
Passmark OSFMount
Passmark's OSFMount supports the creation of RAM disks, and also allows local disk image files (bit-for-bit copies of a disk partition) to be mounted in Windows with a drive letter. OSFMount is a free utility designed for use with PassMark OSForensics.
Primo Ramdisk
Romex Software's Primo Ramdisk provides a graphical interface and works with all Windows environments from XP to Windows 10 and all Windows Server editions from 2003 to 2019 (currently). It supports up to 128 disks, each up to 32 GB in the Pro version and 1 TB in the Ultimate and Server editions, can use invisible memory on 32-bit versions of Windows, and can save contents at shutdown or hibernation. Paid and trial versions are available.
WinRamTech (QSoft) Ramdisk Enterprise
A RAM disk compatible with all Windows workstation and server OS versions (32- and 64-bit) starting from Windows 2000. The content of the RAM disk can be made 'persistent', i.e. saved to an image file on the hard disk at regular intervals and/or at shutdown, and restored from the same image file at boot time. Because of the built-in disk format routines and the built-in loading of the image file, this RAM disk drive is already fully accessible at the boot stage where services and automatically started programs are launched. Benchmarks running two RAM disks concurrently suggest that this RAM disk is among the fastest. Although development of this RAM disk ended in 2017, version 5.3.2.15 may still be purchased.
SoftPerfect RAM Disk
Available for Windows 7, 8 and 10, and Windows Server from 2008 R2 to 2019. It can access memory available to Windows, i.e. on 32-bit systems the RAM disk is limited to the same 4 GB as 32-bit Windows itself; to use physical memory beyond 4 GB, SoftPerfect RAM Disk must be installed on a 64-bit system. Multiple RAM disks can be created, and these can be made persistent by saving contents to and restoring from a disk image file.
StarWind Software Virtual RAM Drive Emulator
StarWind Software makes a freeware RAM disk software for mounting memory as actual drives within Windows. Both x86 and x64 versions exist.
Ultra RamDisk
RAMDisk software which can also mount various CD image formats, such as iso, ooo, cue, ccd, nrg, mds, and img. The application has two versions, paid and free; the free version allows the creation of a single RAM disk up to 2 GB in size.
VSuite Ramdisk
The Free Edition (limited to 32-bit Windows 2000/XP/2003) is able to use 'invisible' RAM in the 3.25 to 4 GB 'gap' (on motherboards with an i946 or later chipset) and is also capable of saving to hard disk on power-down (so, in theory, the RAM disk can hold the Windows XP swap file and survive a hibernate). While the Free Edition allows multiple RAM disk drives to be set up, the total of all drives is limited to 4096 MB. The current version, VSuite Ramdisk II, has been rebranded as 'Primo Ramdisk', all versions of which are chargeable.
Microsoft source code
Ramdisk.sys sample driver for Windows 2000
Microsoft offers a 'demonstration' RAM disk driver for Windows 2000 as part of the Windows Driver Kit. It is limited to using the same physical RAM as the operating system, and is available as a free download with source code.
RAMDisk sample for Windows 7/8
Microsoft provides source code for a RAM disk driver for Windows 7 and 8.
Native
Windows also has a rough analog to tmpfs in the form of "temporary files". Files created with both FILE_ATTRIBUTE_TEMPORARY and FILE_FLAG_DELETE_ON_CLOSE are held in memory and only written to disk if the system experiences high memory pressure. In this way they behave like tmpfs, except the files are written to the specified path during low memory situations, rather than to swap space. This technique is often used by servers along with TransmitFile to render content to a buffer before sending to the client.
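A minimal sketch of this technique using ctypes follows. It is Windows-only; the flag values are the well-known Windows SDK constants, while the helper names are illustrative, and no error handling is shown:

```python
import ctypes
import sys

# Well-known Win32 constants (from the Windows SDK headers).
FILE_ATTRIBUTE_TEMPORARY  = 0x00000100
FILE_FLAG_DELETE_ON_CLOSE = 0x04000000
GENERIC_READ  = 0x80000000
GENERIC_WRITE = 0x40000000
CREATE_ALWAYS = 2

def temp_flags():
    """Flag combination asking the cache manager to keep the file in
    memory and delete it when the last handle closes."""
    return FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE

def create_memory_backed_temp(path):
    """Create a tmpfs-like temporary file (sketch; Windows only)."""
    if sys.platform != "win32":
        raise OSError("CreateFileW is only available on Windows")
    return ctypes.windll.kernel32.CreateFileW(
        path, GENERIC_READ | GENERIC_WRITE,
        0, None, CREATE_ALWAYS, temp_flags(), None)
```

As long as memory pressure stays low, writes to such a handle never reach the disk at `path`; only under pressure is the data flushed there rather than to swap.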
Solaris
Ramdiskadm
Ramdiskadm is a utility in the Solaris operating system used to dynamically add and destroy ramdisk volumes of any user-defined size. An example of how to use ramdiskadm to add a new RAM disk in a Solaris environment is as follows:
$ ramdiskadm -a ramdisk1 100m
To destroy the RAM disk:
$ ramdiskadm -d ramdisk1
All created RAM disks can be accessed from the /dev/ramdisk directory path and treated like any other block device; that is, accessed like a physical block device, labeled with a file system, mounted, or even used in a ZFS pool.
DOS
FreeDOS includes SRDISK
MS-DOS 3.2 includes RAMDRIVE.SYS
PC DOS 3.0 includes VDISK.SYS
DR-DOS included VDISK.SYS
Multiuser DOS included an automatic RAM disk as drive M:
References
External links
12 RAM Disk Software Benchmarked for Fastest Read and Write Speed
RAM Disk technology: Performance Comparison
Are RAM Drives Faster Than SSDs? 5 Things You Must Know
RAM drive software |
38763655 | https://en.wikipedia.org/wiki/Delrina%20Corp%20v%20Triolet%20Systems%20Inc | Delrina Corp v Triolet Systems Inc | , also known as Delrina II, is a 2002 Ontario Court of Appeal case which established the existence of the merger doctrine in Canadian copyright law. The plaintiff, Delrina Corp., sued Triolet Systems Inc. and Brian Duncombe for infringing its copyright of the computer program Sysview by designing similar software, called Assess. The plaintiffs were awarded an interlocutory injunction but ultimately lost at trial. Delrina Corp.’s appeal to the Ontario Court of Appeal was dismissed.
Background
Delrina Corp. hired Brian Duncombe in January 1984 to improve Sysview, a computer program designed to monitor the efficiency of a Hewlett-Packard HP 3000 computer. After leaving Delrina Corp., Duncombe began working for Triolet Systems Inc. to design a functionally similar program called Assess, which would compete directly with Sysview.
Delrina Corp. brought an action for copyright infringement against Duncombe and Triolet Systems Inc., alleging that Assess was copied from Sysview. Delrina Corp. obtained an interlocutory injunction which prohibited the defendants from marketing or selling Assess, or from giving it away.
At trial, Justice O’Leary dismissed the infringement action and awarded damages to the defendants.
Delrina Corp. appealed both the dismissal of the infringement action and the damages order.
Ruling
Both appeals were dismissed.
Grounds of Appeal for the Infringement Action
Delrina Corp. set out four grounds of appeal regarding the dismissed infringement action: whether the trial judge erred in (1) his definition of the term "copying"; (2) excusing similarities between Assess and Sysview based on factors which are irrelevant to copyright law; (3) denying that parts of Sysview were copyrightable; and (4) drawing an adverse inference from the fact that Delrina Corp. did not produce their expert's report.
1. Trial judge’s definition of "copying"
Under this ground of appeal, Delrina Corp. argued that the trial judge erroneously limited the definition of "copying". It claimed that the trial judge's definition only included that which was copied directly from Sysview and excluded copying from memory.
At the Ontario Court of Appeal, Morden J.A. (speaking for the court) confirmed that copying from memory must be part of the definition, but nevertheless rejected this ground of appeal. Considering the trial decision as a whole, Morden J.A. held that the essential findings were not based on the erroneous definition of "copying" and therefore did not warrant overturning the decision.
2. Excusing similarities based on irrelevant factors
Delrina Corp. contended that the trial judge erred by excusing similarities between Assess and Sysview based on irrelevant factors. Two such factors were that: (a) Duncombe deliberately designed Assess to be similar to Sysview; and (b) Duncombe was the author of both programs. Morden J.A. dismissed this ground of appeal, based on the following reasoning:
(a) Duncombe deliberately designed Assess to be similar to Sysview
According to Morden J.A., the trial judge did not excuse copying based on the fact that Duncombe deliberately designed Assess to be similar to Sysview. Instead, the fact that Duncombe intended to make the two programs capable of performing the same functions was used to explain the similarities between them. Furthermore, the functional similarity was not necessarily evidence of copying.
(b) Duncombe being author of both programs
Morden J.A. interpreted the trial judge to mean that some similarities between the two programs could have resulted from Duncombe's style and experience, but that does not mean these were accepted as a justification for copying. Though the similarities may be attributable to the fact that Duncombe authored both programs, this is not probative of the copying issue.
3. Error in denying copyrightability to parts of Sysview
(a) Application of United States versus Canadian authorities
Delrina Corp. argued that the trial judge erroneously denied copyright to much of Sysview because of his reliance on United States authorities instead of Canadian ones. It urged that Sysview met the Canadian standard of originality under the Copyright Act. Morden J.A. confirmed that in the United States, England and Canada, copyright law only protects original expression and not ideas. The idea/expression dichotomy is a common feature of copyright law in all three countries, but is not applied in the same way in each. In the United States the scope of copyright protection is narrower because the idea/expression dichotomy is applied more rigorously. On the other hand, the scope of copyright protection is wider in England and Canada. As a consequence of a more relaxed application of the dichotomy, some ideas can be protected by copyright in English/Canadian law based on a recognition of the skill and labour required to create a work. Under this ground of appeal, Delrina Corp. specifically argued that the trial judge erred in applying the American "abstraction-filtration-comparison" method and merger doctrine to deny copyrightability in Sysview. Both of these issues relate to the idea/expression dichotomy.
(i) Abstraction-filtration-comparison method
In order to determine whether copyright was infringed in this case, the trial judge had to determine whether parts of Sysview were substantially reproduced. This required establishing whether those parts were capable of copyright protection. Delrina submitted that the trial judge erroneously applied the American abstraction-filtration-comparison method in his substantial reproduction analysis. In his reasons, the trial judge said that "[w]hether a Canadian court should adopt the abstraction-filtration-comparison method in deciding an action for copyright infringement or some other similar method,...some method must be found to weed out or remove from copyright protection those portions which,...cannot be protected by copyright". From that statement, Morden J.A. concluded that the trial judge did not necessarily use the American method and accepted that some weeding out is necessary in a substantial reproduction analysis. This ground of appeal was rejected.
(ii) Merger doctrine
The merger doctrine states that when expression and idea merge, then copyright does not subsist in the work. Delrina Corp. argued that the trial judge erred in relying on the American merger doctrine in Canada to find that certain parts of Sysview were not protected by copyright. Though the trial judge did approve of the merger doctrine, Morden J.A. held that it was appropriate for him to do so. The view of the Court of Appeal is summarized by the following passage:
"The merger notion is a natural corollary of the idea/expression distinction which...is fundamental in copyright law in Canada, England and the United States. Clearly, if there is only one or a very limited number of ways to achieve a particular result in a computer program, to hold that that way or ways are protectable by copyright could give the copyright holder a monopoly on the idea or function itself."
In this case the trial judge found that copying of Sysview was not the source of the similarities between the two programs. Many similarities were commanded by Hewlett Packard's operating system and others could be attributed to common programming practices. Though the trial judge may have alluded to some American authorities, his analysis did not deny Delrina Corp. the benefit of the broader copyright protection afforded by the Canadian system. There was no denial of copyright protection to ideas reflecting skill and labour, which would attract protection under the Canadian application of the idea/expression dichotomy. Morden J.A. consequently rejected this ground of appeal.
(b) Flawed approach
Under this third ground of appeal, Delrina Corp. also contended that the trial judge's approach to determining copyrightability was flawed, and specifically erred by separating Sysview into its parts to determine whether each was entitled to protection. It submitted that the correct approach was to first establish whether Sysview as a whole was entitled to copyright protection and then to determine if the parts alleged to be reproduced by the defendants formed a substantial part of the whole, as per Ladbroke (Football) Ltd. v William Hill (Football) Ltd. Morden J.A. held that the trial judge at no point stated that there was no copyright in Sysview as a whole, so on appeal it was assumed that copyright did exist in the whole work. Assuming as such, the trial judge was justified in his approach of comparing the allegedly similar parts of Sysview and Assess. The trial judge found that the parts of Sysview in issue were not entitled to copyright and were not copied by the defendants.
Morden J.A. accepted the trial judge's finding that no substantial reproduction had been proven.
4. Trial judge's drawing of an adverse inference
The appellant argued that the trial judge erred in drawing an adverse inference from the fact that it did not produce its expert's report. Morden J.A. held that though an adverse inference was drawn, it was irrelevant to the trial judge's ultimate findings. The expert was asked for his opinion as to whether he thought Assess was copied from Sysview. Regardless of the expert's opinion, the trial judge made independent findings of fact regarding the issue of copying. If the adverse inference should not have been made in this case, then the error was harmless.
References
Canadian copyright case law
2002 in Canadian case law
2002 in Ontario
Court of Appeal for Ontario cases |
537442 | https://en.wikipedia.org/wiki/HDMI | HDMI | High-Definition Multimedia Interface (HDMI) is a proprietary audio/video interface for transmitting uncompressed video data and compressed or uncompressed digital audio data from an HDMI-compliant source device, such as a display controller, to a compatible computer monitor, video projector, digital television, or digital audio device. HDMI is a digital replacement for analog video standards.
HDMI implements the EIA/CEA-861 standards, which define video formats and waveforms, transport of compressed and uncompressed LPCM audio, auxiliary data, and implementations of the VESA EDID. CEA-861 signals carried by HDMI are electrically compatible with the CEA-861 signals used by the Digital Visual Interface (DVI). No signal conversion is necessary, nor is there a loss of video quality when a DVI-to-HDMI adapter is used. The Consumer Electronics Control (CEC) capability allows HDMI devices to control each other when necessary and allows the user to operate multiple devices with one handheld remote control device.
Several versions of HDMI have been developed and deployed since the initial release of the technology, but all use the same cable and connector. Other than improved audio and video capacity, performance, resolution and color spaces, newer versions have optional advanced features such as 3D, Ethernet data connection, and CEC extensions.
Production of consumer HDMI products started in late 2003. In Europe, either DVI-HDCP or HDMI is included in the HD ready in-store labeling specification for TV sets for HDTV, formulated by EICTA with SES Astra in 2005. HDMI began to appear on consumer HDTVs in 2004 and camcorders and digital still cameras in 2006. Nearly 10 billion HDMI devices have been sold.
History
The HDMI founders were Hitachi, Panasonic, Philips, Silicon Image, Sony, Thomson, and Toshiba. Digital Content Protection, LLC provides HDCP (which was developed by Intel) for HDMI. HDMI has the support of motion picture producers Fox, Universal, Warner Bros. and Disney, along with system operators DirecTV, EchoStar (Dish Network) and CableLabs.
The HDMI founders began development on HDMI 1.0 on April 16, 2002, with the goal of creating an AV connector that was backward-compatible with DVI. At the time, DVI-HDCP (DVI with HDCP) and DVI-HDTV (DVI-HDCP using the CEA-861-B video standard) were being used on HDTVs. HDMI 1.0 was designed to improve on DVI-HDTV by using a smaller connector and adding audio capability and enhanced capability and consumer electronics control functions.
The first Authorized Testing Center (ATC), which tests HDMI products, was opened by Silicon Image on June 23, 2003, in California, United States. The first ATC in Japan was opened by Panasonic on May 1, 2004, in Osaka. The first ATC in Europe was opened by Philips on May 25, 2005, in Caen, France. The first ATC in China was opened by Silicon Image on November 21, 2005, in Shenzhen. The first ATC in India was opened by Philips on June 12, 2008, in Bangalore. The HDMI website contains a list of all the ATCs.
According to In-Stat, the number of HDMI devices sold was 5 million in 2004, 17.4 million in 2005, 63 million in 2006, and 143 million in 2007. HDMI has become the de facto standard for HDTVs, and according to In-Stat, around 90% of digital televisions in 2007 included HDMI. In-Stat has estimated that 229 million HDMI devices were sold in 2008. On April 8, 2008 there were over 850 consumer electronics and PC companies that had adopted the HDMI specification (HDMI adopters). On January 7, 2009, HDMI Licensing, LLC announced that HDMI had reached an installed base of over 600 million HDMI devices. In-Stat has estimated that 394 million HDMI devices would sell in 2009 and that all digital televisions by the end of 2009 would have at least one HDMI input.
On January 28, 2008, In-Stat reported that shipments of HDMI were expected to exceed those of DVI in 2008, driven primarily by the consumer electronics market.
In 2008, PC Magazine awarded a Technical Excellence Award in the Home Theater category for an "innovation that has changed the world" to the CEC portion of the HDMI specification. Ten companies were given a Technology and Engineering Emmy Award for their development of HDMI by the National Academy of Television Arts and Sciences on January 7, 2009.
On October 25, 2011, the HDMI Forum was established by the HDMI founders to create an open organization so that interested companies can participate in the development of the HDMI specification. All members of the HDMI Forum have equal voting rights, may participate in the Technical Working Group, and if elected can be on the Board of Directors. There is no limit to the number of companies allowed in the HDMI Forum though companies must pay an annual fee of US$15,000 with an additional annual fee of $5,000 for those companies who serve on the Board of Directors. The Board of Directors is made up of 11 companies who are elected every 2 years by a general vote of HDMI Forum members. All future development of the HDMI specification takes place in the HDMI Forum and is built upon the HDMI 1.4b specification. Also on the same day, HDMI Licensing, LLC announced that there were over 1,100 HDMI adopters and that over 2 billion HDMI-enabled products had shipped since the launch of the HDMI standard. From October 25, 2011, all development of the HDMI specification became the responsibility of the newly created HDMI Forum.
On January 8, 2013, HDMI Licensing, LLC announced that there were over 1,300 HDMI adopters and that over 3 billion HDMI devices had shipped since the launch of the HDMI standard. The day also marked the 10th anniversary of the release of the first HDMI specification.
Nearly 10 billion HDMI devices have been sold.
Specifications
The HDMI specification defines the protocols, signals, electrical interfaces and mechanical requirements of the standard. The maximum pixel clock rate for HDMI 1.0 is 165 MHz, which is sufficient to allow 1080p and WUXGA (1920×1200) at 60Hz. HDMI 1.3 increases that to 340 MHz, which allows for higher resolution (such as WQXGA, 2560×1600) across a single digital link. An HDMI connection can either be single-link (type A/C/D) or dual-link (type B) and can have a video pixel rate of 25 MHz to 340 MHz (for a single-link connection) or 25 MHz to 680 MHz (for a dual-link connection). Video formats with rates below 25 MHz (e.g., 13.5 MHz for 480i/NTSC) are transmitted using a pixel-repetition scheme.
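The ceiling figures above can be sanity-checked with simple arithmetic. The sketch below is illustrative only, not part of the HDMI specification; it uses the standard CEA-861 total frame size for 1080p (2200 × 1125 pixels, including blanking):

```python
# Illustrative check that HDMI 1.0's 165 MHz pixel-clock limit covers
# 1080p60. The 2200 x 1125 figure is the standard CEA-861 total frame
# size for 1080p, i.e. active pixels plus blanking intervals.

def pixel_clock_mhz(h_total, v_total, refresh_hz):
    """Pixel clock in MHz: total pixels per frame times frames per second."""
    return h_total * v_total * refresh_hz / 1e6

clk = pixel_clock_mhz(2200, 1125, 60)
print(f"1080p60 pixel clock: {clk} MHz")            # 148.5 MHz
print(f"within HDMI 1.0's 165 MHz: {clk <= 165}")   # True

# Each TMDS channel carries 10 bits per pixel clock cycle, so the serial
# bit rate per channel is ten times the pixel clock.
print(f"TMDS bit rate per channel: {clk * 10} Mbit/s")
```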
Audio/video
HDMI uses the Consumer Electronics Association/Electronic Industries Alliance 861 standards. HDMI 1.0 to HDMI 1.2a uses the EIA/CEA-861-B video standard, HDMI 1.3 uses the CEA-861-D video standard, and HDMI 1.4 uses the CEA-861-E video standard. The CEA-861-E document defines "video formats and waveforms; colorimetry and quantization; transport of compressed and uncompressed LPCM audio; carriage of auxiliary data; and implementations of the Video Electronics Standards Association (VESA) Enhanced Extended Display Identification Data Standard (E-EDID)". On July 15, 2013, the CEA announced the publication of CEA-861-F, a standard that can be used by interfaces such as DVI, HDMI, and LVDS. CEA-861-F adds the ability to transmit several Ultra HD video formats and additional color spaces.
To ensure baseline compatibility between different HDMI sources and displays (as well as backward compatibility with the electrically compatible DVI standard) all HDMI devices must implement the sRGB color space at 8 bits per component. Ability to use the YCbCr color space and higher color depths ("deep color") is optional. HDMI permits sRGB 4:4:4 chroma subsampling (8–16 bits per component), xvYCC 4:4:4 chroma subsampling (8–16 bits per component), YCbCr 4:4:4 chroma subsampling (8–16 bits per component), or YCbCr 4:2:2 chroma subsampling (8–12 bits per component). The color spaces that can be used by HDMI are ITU-R BT.601, ITU-R BT.709-5 and IEC 61966-2-4.
For digital audio, if an HDMI device has audio, it is required to implement the baseline format: stereo (uncompressed) PCM. Other formats are optional, with HDMI allowing up to 8 channels of uncompressed audio at sample sizes of 16 bits, 20 bits, or 24 bits, with sample rates of 32 kHz, 44.1 kHz, 48 kHz, 88.2 kHz, 96 kHz, 176.4 kHz, or 192 kHz. HDMI also carries any IEC 61937-compliant compressed audio stream, such as Dolby Digital and DTS, and up to 8 channels of one-bit DSD audio (used on Super Audio CDs) at rates up to four times that of Super Audio CD. With version 1.3, HDMI allows lossless compressed audio streams Dolby TrueHD and DTS-HD Master Audio. As with the video, audio capability is optional. Audio return channel (ARC) is a feature introduced in the HDMI 1.4 standard. "Return" refers to the case where the audio comes from the TV and can be sent "upstream" to the AV receiver using the HDMI cable connected to the AV receiver. An example given on the HDMI website is that a TV that directly receives a terrestrial/satellite broadcast, or has a video source built in, sends the audio "upstream" to the AV receiver.
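The audio limits quoted above translate into modest bit rates compared with the video link, as a short illustrative calculation shows:

```python
# Upper bound on uncompressed LPCM payload over HDMI, using the limits
# quoted above: 8 channels, 24-bit samples, 192 kHz sample rate.

def lpcm_bitrate_mbps(channels, bits_per_sample, sample_rate_hz):
    """Raw LPCM bit rate in Mbit/s."""
    return channels * bits_per_sample * sample_rate_hz / 1e6

peak = lpcm_bitrate_mbps(8, 24, 192_000)
print(f"8ch/24-bit/192 kHz LPCM: {peak} Mbit/s")       # 36.864 Mbit/s

# Baseline mandatory format for comparison: stereo 16-bit PCM at 48 kHz.
baseline = lpcm_bitrate_mbps(2, 16, 48_000)
print(f"2ch/16-bit/48 kHz LPCM: {baseline} Mbit/s")    # 1.536 Mbit/s
```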
The HDMI standard was not designed to pass closed caption data (for example, subtitles) to the television for decoding. As such, any closed caption stream must be decoded and included as an image in the video stream(s) prior to transmission over an HDMI cable to appear on the DTV. This limits the caption style (even for digital captions) to only that decoded at the source prior to HDMI transmission. This also prevents closed captions when transmission over HDMI is required for upconversion. For example, a DVD player that sends an upscaled 720p/1080i format via HDMI to an HDTV has no way to pass Closed Captioning data so that the HDTV can decode it, as there is no line 21 VBI in that format.
Communication channels
HDMI has three physically separate communication channels, which are the DDC, TMDS and the optional CEC. HDMI 1.4 added ARC and HEC.
Display Data Channel (DDC)
The Display Data Channel (DDC) is a communication channel based on the I2C bus specification. HDMI specifically requires the device implement the Enhanced Display Data Channel (E-DDC), which is used by the HDMI source device to read the E-EDID data from the HDMI sink device to learn what audio/video formats it can take. HDMI requires that the E-DDC implement I2C standard mode speed (100 kbit/s) and allows it to optionally implement fast mode speed (400 kbit/s).
The DDC channel is actively used for High-bandwidth Digital Content Protection (HDCP).
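As an illustration of what a source reads over the E-DDC, the sketch below validates a 128-byte EDID base block using the VESA E-EDID layout (a fixed 8-byte header, a packed three-letter manufacturer ID in bytes 8–9, and a checksum in byte 127). The sample block is synthetic, not taken from a real display:

```python
# Minimal sketch of validating the first 128-byte EDID block that an HDMI
# source reads from a sink over the DDC. Field offsets follow the VESA
# E-EDID base-block layout.

EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def decode_manufacturer(edid):
    """Bytes 8-9 pack three 5-bit letters ('A' = 1), big-endian."""
    word = (edid[8] << 8) | edid[9]
    letters = [(word >> shift) & 0x1F for shift in (10, 5, 0)]
    return "".join(chr(ord('A') + code - 1) for code in letters)

def validate_edid(edid):
    if len(edid) < 128 or edid[:8] != EDID_HEADER:
        return False
    # All 128 bytes of the base block must sum to 0 modulo 256.
    return sum(edid[:128]) % 256 == 0

# Build a synthetic block for illustration (not a real monitor's EDID).
block = bytearray(128)
block[:8] = EDID_HEADER
block[8:10] = (0b0_00001_00010_00011).to_bytes(2, "big")  # "ABC"
block[127] = (-sum(block[:127])) % 256                    # fix checksum

print(validate_edid(block))        # True
print(decode_manufacturer(block))  # ABC
```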
Transition-minimized differential signaling (TMDS)
Transition-minimized differential signaling (TMDS) on HDMI interleaves video, audio and auxiliary data using three different packet types, called the video data period, the data island period and the control period. During the video data period, the pixels of an active video line are transmitted. During the data island period (which occurs during the horizontal and vertical blanking intervals), audio and auxiliary data are transmitted within a series of packets. The control period occurs between video and data island periods.
Both HDMI and DVI use TMDS to send 10-bit characters that are encoded using 8b/10b encoding that differs from the original IBM form for the video data period and 2b/10b encoding for the control period. HDMI adds the ability to send audio and auxiliary data using 4b/10b encoding for the data island period. Each data island period is 32 pixels in size and contains a 32-bit packet header, which includes 8 bits of BCH ECC parity data for error correction and describes the contents of the packet. Each packet contains four subpackets, and each subpacket is 64 bits in size, including 8 bits of BCH ECC parity data, allowing for each packet to carry up to 224 bits of audio data. Each data island period can contain up to 18 packets. Seven of the 15 packet types described in the HDMI 1.3a specifications deal with audio data, while the other 8 types deal with auxiliary data. Among these are the general control packet and the gamut metadata packet. The general control packet carries information on AVMUTE (which mutes the audio during changes that may cause audio noise) and color depth (which sends the bit depth of the current video stream and is required for deep color). The gamut metadata packet carries information on the color space being used for the current video stream and is required for xvYCC.
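The packet sizes above determine the payload available in a data island; a small worked example:

```python
# Payload arithmetic for an HDMI data island, straight from the figures
# above: each packet is a 32-bit header plus four 64-bit subpackets, and
# each subpacket carries 8 bits of BCH parity.

SUBPACKETS = 4
SUBPACKET_BITS = 64
BCH_PARITY_BITS = 8
MAX_PACKETS_PER_ISLAND = 18

payload_per_packet = SUBPACKETS * (SUBPACKET_BITS - BCH_PARITY_BITS)
print(f"payload per packet: {payload_per_packet} bits")        # 224

island_payload = MAX_PACKETS_PER_ISLAND * payload_per_packet
print(f"max payload per data island: {island_payload} bits")   # 4032
```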
Consumer Electronics Control (CEC)
Consumer Electronics Control (CEC) is an HDMI feature designed to allow the user to command and control up to 15 CEC-enabled devices, that are connected through HDMI, by using only one of their remote controls (for example by controlling a television set, set-top box, and DVD player using only the remote control of the TV). CEC also allows for individual CEC-enabled devices to command and control each other without user intervention.
It is a one-wire bidirectional serial bus that is based on the CENELEC standard AV.link protocol to perform remote control functions. CEC wiring is mandatory, although implementation of CEC in a product is optional. It was defined in HDMI Specification 1.0 and updated in HDMI 1.2, HDMI 1.2a and HDMI 1.3a (which added timer and audio commands to the bus). USB to CEC adapters exist that allow a computer to control CEC-enabled devices.
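At the frame level, a CEC message is a header block carrying the initiator and destination logical addresses followed by an opcode. The sketch below composes such a frame; the two opcode values are taken from the CEC specification, and only the framing (not the one-wire bus signalling) is modeled:

```python
# Hedged sketch of composing a CEC message. Logical addresses run 0-14,
# with 15 used for broadcast, as described above.

BROADCAST = 0xF
OPCODES = {
    "IMAGE_VIEW_ON": 0x04,   # wake the TV and show this source
    "STANDBY": 0x36,         # put the destination into standby
}

def cec_frame(initiator, destination, opcode_name, params=b""):
    """Header byte packs initiator (high nibble) and destination (low)."""
    header = (initiator & 0xF) << 4 | (destination & 0xF)
    return bytes([header, OPCODES[opcode_name]]) + params

# Playback device (logical address 4) asks the TV (address 0) to wake up.
print(cec_frame(4, 0, "IMAGE_VIEW_ON").hex())       # 4004
# Broadcast a standby request to every device on the bus.
print(cec_frame(4, BROADCAST, "STANDBY").hex())     # 4f36
```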
HDMI Ethernet and Audio Return Channel
Introduced in HDMI 1.4, HDMI Ethernet and Audio Return Channel (HEAC) adds a high-speed bidirectional data communication link (HEC) and the ability to send audio data upstream to the source device (ARC). HEAC utilizes two lines from the connector: the previously unused Reserved pin (called HEAC+) and the Hot Plug Detect pin (called HEAC−). If only ARC transmission is required, a single mode signal using the HEAC+ line can be used, otherwise, HEC is transmitted as a differential signal over the pair of lines, and ARC as a common mode component of the pair.
Audio Return Channel (ARC)
ARC is an audio link meant to replace other cables between the TV and the A/V receiver or speaker system. This direction is used when the TV is the one that generates or receives the video stream instead of the other equipment. A typical case is the running of an app on a smart TV such as Netflix, but reproduction of audio is handled by the other equipment. Without ARC, the audio output from the TV must be routed by another cable, typically TOSLink or RCA, into the speaker system.
HDMI Ethernet Channel (HEC)
HDMI Ethernet Channel technology consolidates video, audio, and data streams into a single HDMI cable, and the HEC feature enables IP-based applications over HDMI and provides a bidirectional Ethernet communication at 100 Mbit/s. The physical layer of the Ethernet implementation uses a hybrid to simultaneously send and receive attenuated 100BASE-TX-type signals through a single twisted pair.
Compatibility with DVI
HDMI is backward compatible with single-link Digital Visual Interface digital video (DVI-D or DVI-I, but not DVI-A or dual-link DVI). No signal conversion is required when an adapter or asymmetric cable is used, so there is no loss of video quality.
From a user's perspective, an HDMI display can be driven by a single-link DVI-D source, since HDMI and DVI-D define an overlapping minimum set of allowed resolutions and frame-buffer formats to ensure a basic level of interoperability. In the reverse case, a DVI-D monitor has the same level of basic interoperability unless content protection with High-bandwidth Digital Content Protection (HDCP) interferes—or the HDMI color encoding is in component color space instead of RGB, which is not possible in DVI. An HDMI source, such as a Blu-ray player, may require an HDCP-compliant display, and refuse to output HDCP-protected content to a non-compliant display. A further complication is that there is a small amount of display equipment, such as some high-end home theater projectors, designed with HDMI inputs but not HDCP-compliant.
Any DVI-to-HDMI adapter can function as an HDMI-to-DVI adapter (and vice versa). Typically, the only limitation is the gender of the adapter's connectors and the gender of the cables and sockets it is used with.
Features specific to HDMI, such as remote-control and audio transport, are not available in devices that use legacy DVI-D signalling. However, many devices output HDMI over a DVI connector (e.g., ATI 3000-series and NVIDIA GTX 200-series video cards), and some multimedia displays may accept HDMI (including audio) over a DVI input. Exact capabilities beyond basic compatibility vary. Adapters are generally bi-directional.
Content protection (HDCP)
High-bandwidth Digital Content Protection (HDCP) is a newer form of digital rights management. Intel created the original technology to make sure that digital content followed the guidelines set by the Digital Content Protection group.
HDMI can use HDCP to encrypt the signal if required by the source device. CSS, CPRM and AACS require the use of HDCP on HDMI when playing back encrypted DVD Video, DVD Audio, HD DVD and Blu-ray Disc. The HDCP Repeater bit controls the authentication and switching/distribution of an HDMI signal. According to HDCP Specification 1.2 (beginning with HDMI CTS 1.3a), any system that implements HDCP must do so in a fully compliant manner. HDCP testing that was previously only a requirement for optional tests such as the "Simplay HD" testing program is now part of the requirements for HDMI compliance. HDCP accommodates up to 127 connected devices with up to 7 levels, using a combination of sources, sinks and repeaters. A simple example of this is several HDMI devices connected to an HDMI AV receiver that is connected to an HDMI display.
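The 127-device and 7-level limits can be expressed as a simple validation over a source/repeater/sink tree; the following sketch is illustrative only and not part of the HDCP protocol itself:

```python
# Sanity-check sketch for the HDCP topology limits quoted above: at most
# 127 connected devices and at most 7 levels. Nested lists model the
# tree: each node is a repeater, each empty list a sink.

MAX_DEVICES = 127
MAX_DEPTH = 7

def topology_ok(tree, depth=1):
    """Return (valid, device_count) for a nested-list topology."""
    if depth > MAX_DEPTH:
        return False, 0
    count = 1
    for child in tree:
        ok, sub = topology_ok(child, depth + 1)
        if not ok:
            return False, 0
        count += sub
    return count <= MAX_DEVICES, count

# An AV receiver (repeater) fanning out to three displays: 4 devices.
print(topology_ok([[], [], []]))   # (True, 4)
```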
Devices called HDCP strippers can remove the HDCP information from the video signal so the video can play on non-HDCP-compliant displays, though a fair use and non-disclosure form must usually be signed with a registering agency before use.
Connectors
There are five HDMI connector types. Type A/B are defined in the HDMI 1.0 specification, type C is defined in the HDMI 1.3 specification, and type D/E are defined in the HDMI 1.4 specification.
Type A The plug (male) connector outside dimensions are 13.9 mm × 4.45 mm, and the receptacle (female) connector inside dimensions are 14 mm × 4.55 mm. There are 19 pins, with bandwidth to carry all SDTV, EDTV, HDTV, UHD, and 4K modes. It is electrically compatible with single-link DVI-D.
Type B This connector is 21.2 mm × 4.45 mm and has 29 pins, carrying six differential pairs instead of three, for use with very high-resolution displays such as WQUXGA (3840×2400). It is electrically compatible with dual-link DVI-D, but has not yet been used in any products. With the introduction of HDMI 1.3, the maximum bandwidth of single-link HDMI exceeded that of dual-link DVI-D. As of HDMI 1.4, the pixel clock rate crossover frequency from single to dual-link has not been defined.
Type C This Mini connector is smaller than the type A plug, measuring 10.42 mm × 2.42 mm but has the same 19-pin configuration. It is intended for portable devices. The differences are that all positive signals of the differential pairs are swapped with their corresponding shield, the DDC/CEC Ground is assigned to pin 13 instead of pin 17, the CEC is assigned to pin 14 instead of pin 13, and the reserved pin is 17 instead of pin 14. The type C Mini connector can be connected to a type A connector using a type A-to-type C cable.
Type D This Micro connector shrinks the connector size to something resembling a micro-USB connector, measuring only 5.83 mm × 2.20 mm. For comparison, a micro-USB connector is 6.85 mm × 1.8 mm and a USB Type-A connector is 11.5 mm × 4.5 mm. It keeps the standard 19 pins of types A and C, but the pin assignment is different from both.
Type E The Automotive Connection System has a locking tab to keep the cable from vibrating loose and a shell to help prevent moisture and dirt from interfering with the signals. A relay connector is available for connecting standard consumer cables to the automotive type.
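The type C pin reassignments described above can be summarized in a small mapping; the sketch below models only the three reassigned control pins (the swapped differential-pair polarities noted in the text are omitted):

```python
# Illustrative table of the type A -> type C control-pin reassignments
# listed above: DDC/CEC Ground moves from pin 17 to 13, CEC from 13 to
# 14, and the reserved pin from 14 to 17.

TYPE_A = {13: "CEC", 14: "Reserved", 17: "DDC/CEC Ground"}
TYPE_C = {13: "DDC/CEC Ground", 14: "CEC", 17: "Reserved"}

def remap_a_to_c(pin_a):
    """Find where a type A control signal lands on the type C connector."""
    signal = TYPE_A[pin_a]
    return next(pin for pin, name in TYPE_C.items() if name == signal)

for pin in (13, 14, 17):
    print(f"type A pin {pin} ({TYPE_A[pin]}) -> type C pin {remap_a_to_c(pin)}")
```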
The HDMI Alternate Mode lets a user connect the reversible USB-C connector on HDMI source devices (mobile, tablet, laptop) to video display/sink devices that use any of the native HDMI connectors, using a USB-C to HDMI cable.
Cables
An HDMI cable is composed of four shielded twisted pairs, with impedance of the order of 100 Ω (±15%), plus seven separate conductors. HDMI cables with Ethernet differ in that three of the separate conductors instead form an additional shielded twisted pair (with the CEC/DDC ground as a shield).
Although no maximum length for an HDMI cable is specified, signal attenuation (dependent on the cable's construction quality and conducting materials) limits usable lengths in practice and certification is difficult to achieve for lengths beyond 13 m. HDMI 1.3 defines two cable categories: Category 1-certified cables, which have been tested at 74.25 MHz (which would include resolutions such as 720p60 and 1080i60), and Category 2-certified cables, which have been tested at 340 MHz (which would include resolutions such as 1080p60 and 4K30). Category 1 HDMI cables are marketed as "Standard" and Category 2 HDMI cables as "High Speed". This labeling guideline for HDMI cables went into effect on October 17, 2008. Category 1 and 2 cables can either meet the required parameter specifications for inter-pair skew, far-end crosstalk, attenuation and differential impedance, or they can meet the required non-equalized/equalized eye diagram requirements. A cable of about can be manufactured to Category 1 specifications easily and inexpensively by using 28 AWG (0.081 mm2) conductors. With better quality construction and materials, including 24 AWG (0.205 mm2) conductors, an HDMI cable can reach lengths of up to . Many HDMI cables under 5 meters of length that were made before the HDMI 1.3 specification can work as Category 2 cables, but only Category 2-tested cables are guaranteed to work for Category 2 purposes.
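The conductor cross-sections quoted above follow from the standard AWG diameter formula, d = 0.127 mm × 92^((36 − n)/39), as a quick check shows:

```python
import math

# Cross-check of the conductor sizes quoted above (28 AWG = 0.081 mm²,
# 24 AWG = 0.205 mm²) using the standard AWG diameter formula.

def awg_area_mm2(gauge):
    """d = 0.127 mm x 92^((36 - n) / 39); area = pi/4 x d^2."""
    diameter = 0.127 * 92 ** ((36 - gauge) / 39)
    return math.pi / 4 * diameter ** 2

for gauge in (28, 24):
    print(f"{gauge} AWG = {awg_area_mm2(gauge):.3f} mm²")
```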
As of the HDMI 1.4 specification, the following cable types are defined for HDMI in general:
Standard HDMI Cable up to 1080i and 720p
Standard HDMI Cable with Ethernet
Standard Automotive HDMI Cable
High Speed HDMI Cable 1080p, 4K 30Hz, 3D and deep color
High Speed HDMI Cable with Ethernet
A new certification program was introduced in October 2015 to certify that cables work at the 18 Gbit/s maximum bandwidth of the HDMI 2.0 specification. In addition to expanding the set of cable testing requirements, the certification program introduces an EMI test to ensure cables minimize interference with wireless signals. These cables are marked with an anti-counterfeiting authentication label and are defined as:
Premium High Speed HDMI Cable
Premium High Speed HDMI Cable with Ethernet
In conjunction with the HDMI 2.1 specification, a third category of cable was announced on January 4, 2017, called "48G". Also known as Category 3 HDMI or "Ultra High Speed" HDMI, the cable is designed to support the 48Gbit/s bandwidth of HDMI 2.1, supporting 4K, 5K, 8K and 10K at 120Hz. The cable is backwards compatible with the earlier HDMI devices, using existing HDMI type A, C and D connectors, and includes HDMI Ethernet.
Ultra High Speed HDMI Cable (48G Cable) 4K, 5K, 8K and 10K at 120Hz
Extenders
An HDMI extender is a single device (or pair of devices) powered with an external power source or with the 5V DC from the HDMI source. Long cables can cause instability of HDCP and blinking on the screen, due to the weakened DDC signal that HDCP requires. HDCP DDC signals must be multiplexed with TMDS video signals to comply with HDCP requirements for HDMI extenders based on a single Category 5/Category 6 cable. Several companies offer amplifiers, equalizers and repeaters that can string several standard HDMI cables together. Active HDMI cables use electronics within the cable to boost the signal and allow for HDMI cables of up to ; those based on HDBaseT can extend to 100 meters; HDMI extenders that are based on dual Category 5/Category 6 cable can extend HDMI to ; while HDMI extenders based on optical fiber can extend HDMI to .
Licensing
The HDMI specification is not an open standard; manufacturers need to be licensed by HDMI LA in order to implement HDMI in any product or component. Companies who are licensed by HDMI LA are known as HDMI Adopters.
DVI is the only interface that does not require a license to interface with HDMI.
HDMI adopters
While earlier versions of HDMI specs are available to the public for download, only adopters have access to the latest standards (HDMI 1.4b/2.1). Only adopters have access to the compliance test specification (CTS) that is used for compliance and certification. Compliance testing is required before any HDMI product can be legally sold.
Adopters have IP rights under Adopter Agreement.
Adopters receive the right to use HDMI logos and TMs on their products and marketing materials.
Adopters are listed on the HDMI website.
Products from adopters are listed and marketed in the official HDMI product finder database.
Adopters receive more exposure through combined marketing, such as the annual HDMI Developers Conference and technology seminars.
HDMI fee structure
There are two annual fee structures associated with being an HDMI adopter:
High-volume (more than 10,000 units) HDMI Adopter Agreement: US$10,000/year
Low-volume (10,000 units or fewer) HDMI Adopter Agreement: US$5,000/year plus a flat US$1/unit administration fee
The annual fee is due upon the execution of the Adopter Agreement, and must be paid on the anniversary of this date each year thereafter.
The royalty fee structure is the same for all volumes. The following variable per-unit royalty is device-based and not dependent on number of ports, chips or connectors:
US$0.15 for each end-user licensed product
US$0.05 per unit if the HDMI logo is used on the product and promotional material
US$0.04 per unit if HDCP is also implemented and the HDMI logo is used
Use of HDMI logo requires compliance testing. Adopters need to license HDCP separately.
The HDMI royalty is only payable on licensed products that will be sold on a stand-alone basis (i.e. that are not incorporated into another licensed product that is subject to an HDMI royalty). For example, if a cable or IC is sold to an adopter who then includes it in a television subject to a royalty, then the cable or IC maker would not pay a royalty, and the television manufacturer would pay the royalty on the final product. If the cable is sold directly to consumers, then the cable would be subject to a royalty.
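The tiered per-unit royalty described above reduces to a simple calculation. The Python sketch below is purely illustrative; the function name is invented and is not part of any HDMI LA material:

```python
def hdmi_royalty_per_unit(uses_logo: bool, implements_hdcp: bool) -> float:
    """Per-unit HDMI royalty in USD, following the tiers described above."""
    if uses_logo and implements_hdcp:
        return 0.04  # logo used and HDCP implemented: lowest tier
    if uses_logo:
        return 0.05  # logo used on product and promotional material
    return 0.15      # base rate for an end-user licensed product

print(hdmi_royalty_per_unit(False, False))  # 0.15
print(hdmi_royalty_per_unit(True, True))    # 0.04
```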
Versions
HDMI devices are manufactured to adhere to various versions of the specification, in which each version is given a number or letter, such as 1.0, 1.2, or 1.4b. Each subsequent version of the specification uses the same kind of cable but increases the bandwidth or capabilities of what can be transmitted over the cable. A product listed as having an HDMI version does not necessarily mean that it has all features in that version, since some HDMI features are optional, such as deep color and xvYCC (which is branded by Sony as "x.v.Color"). Since the release of HDMI 1.4, the HDMI Licensing Administrator, Inc. (which oversees the HDMI standard) has banned the use of version numbers to identify cables. Non-cable HDMI products, starting on January 1, 2012, may no longer reference the HDMI number, and must state which features of the HDMI specification the product implements.
Version 1.0
HDMI 1.0 was released on December 9, 2002, and is a single-cable digital audio/video connector interface. The link architecture is based on DVI, using exactly the same video transmission format but sending audio and other auxiliary data during the blanking intervals of the video stream. HDMI 1.0 allows a maximum TMDS clock of 165 MHz (4.95 Gbit/s bandwidth per link), the same as DVI. It defines two connectors called Type A and Type B, with pinouts based on the Single-Link DVI-D and Dual-Link DVI-D connectors respectively, though the Type B connector was never used in any commercial products. HDMI 1.0 uses TMDS encoding for video transmission, giving it 3.96 Gbit/s of video bandwidth (sufficient for 1080p at 60 Hz) and 8-channel LPCM/192 kHz/24-bit audio. HDMI 1.0 requires support for RGB video, with optional support for YCbCr 4:4:4 and 4:2:2 (mandatory if the device supports YCbCr on other interfaces). Color depth of 10 bpc (30 bit/px) or 12 bpc (36 bit/px) is allowed when using 4:2:2 subsampling, but only 8 bpc (24 bit/px) color depth is permitted when using RGB or 4:4:4. Only the Rec. 601 and Rec. 709 color spaces are supported. HDMI 1.0 allows only specific pre-defined video formats, including all the formats defined in EIA/CEA-861-B and some additional formats listed in the HDMI Specification itself. All HDMI sources/sinks must also be capable of sending/receiving native Single-Link DVI video and be fully compliant with the DVI Specification.
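The bandwidth figures quoted above follow from the TMDS link arithmetic (three data channels, 10 bits per channel per TMDS clock, 8b/10b coding); a quick illustrative check:

```python
# HDMI 1.0 / DVI single-link TMDS arithmetic, per the figures above.
tmds_clock_hz = 165e6    # maximum TMDS clock
channels = 3             # three TMDS data channels
bits_per_clock = 10      # each channel carries 10 bits per TMDS clock

link_bandwidth = tmds_clock_hz * channels * bits_per_clock
video_bandwidth = link_bandwidth * 8 / 10  # 8b/10b TMDS coding efficiency

print(link_bandwidth / 1e9)   # 4.95 (Gbit/s per link)
print(video_bandwidth / 1e9)  # 3.96 (Gbit/s of video data)
```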
Version 1.1
HDMI 1.1 was released on May 20, 2004, and added support for DVD-Audio.
Version 1.2
HDMI 1.2 was released on August 8, 2005, and added the option of One Bit Audio, used on Super Audio CDs, at up to 8 channels. To make HDMI more suitable for use on PC devices, version 1.2 also removed the requirement that only explicitly supported formats be used. It added the ability for manufacturers to create vendor-specific formats, allowing any arbitrary resolution and refresh rate rather than being limited to a pre-defined list of supported formats. In addition, it added explicit support for several new formats including 720p at 100 and 120 Hz, and relaxed the pixel format support requirements so that sources with only native RGB output (PC sources) would not be required to support YCbCr output.
HDMI 1.2a was released on December 14, 2005 and fully specifies Consumer Electronic Control (CEC) features, command sets and CEC compliance tests.
Version 1.3
HDMI 1.3 was released on June 22, 2006, and increased the maximum TMDS clock to 340 MHz (10.2 Gbit/s). Like previous versions, it uses TMDS encoding, giving it a maximum video bandwidth of 8.16 Gbit/s (1920×1080 at 120 Hz or 2560×1440 at 60 Hz). It added support for 10 bpc, 12 bpc, and 16 bpc color depth (30, 36, and 48 bit/px), called deep color. It also added support for the xvYCC color space, in addition to the Rec. 601 and Rec. 709 color spaces supported by previous versions, and added the ability to carry metadata defining color gamut boundaries. It also optionally allows output of Dolby TrueHD and DTS-HD Master Audio streams for external decoding by AV receivers. It incorporates automatic audio syncing (audio video sync) capability. It defined cable Categories 1 and 2, with Category 1 cable being tested up to 74.25 MHz and Category 2 being tested up to 340 MHz. It also added the new type C Mini connector for portable devices.
HDMI 1.3a was released on November 10, 2006, and had Cable and Sink modifications for type C, source termination recommendations, and removed undershoot and maximum rise/fall time limits. It also changed CEC capacitance limits, and CEC commands for timer control were brought back in an altered form, with audio control commands added. It also added the optional ability to stream SACD in its bitstream DST format rather than uncompressed raw DSD.
Version 1.4
HDMI 1.4 was released on June 5, 2009, and first came to market after Q2 of 2009. Retaining the bandwidth of the previous version, HDMI 1.4 added support for 4096×2160 at 24 Hz, 3840×2160 at 24, 25, and 30 Hz, and 1920×1080 at 120 Hz. It also added an HDMI Ethernet Channel (HEC) that accommodates a 100 Mbit/s Ethernet connection between the two HDMI connected devices so they can share an Internet connection, introduced an audio return channel (ARC), 3D Over HDMI, a new Micro HDMI Connector, an expanded set of color spaces with the addition of sYCC601, Adobe RGB and Adobe YCC601, and an Automotive Connection System. HDMI 1.4 defined several stereoscopic 3D formats including field alternative (interlaced), frame packing (a full resolution top-bottom format), line alternative full, side-by-side half, side-by-side full, 2D + depth, and 2D + depth + graphics + graphics depth (WOWvx). HDMI 1.4 requires that 3D displays implement the frame packing 3D format at either 720p50 and 1080p24 or 720p60 and 1080p24. High Speed HDMI cables as defined in HDMI 1.3 work with all HDMI 1.4 features except for the HDMI Ethernet Channel, which requires the new High Speed HDMI Cable with Ethernet defined in HDMI 1.4.
HDMI 1.4a was released on March 4, 2010, and added two mandatory 3D formats for broadcast content, an area that had been deferred in HDMI 1.4 pending the direction of the 3D broadcast market. HDMI 1.4a defines mandatory 3D formats for broadcast, game, and movie content. It requires that 3D displays implement the frame packing 3D format at either 720p50 and 1080p24 or 720p60 and 1080p24, side-by-side horizontal at either 1080i50 or 1080i60, and top-and-bottom at either 720p50 and 1080p24 or 720p60 and 1080p24.
HDMI 1.4b was released on October 11, 2011, containing only minor clarifications to the 1.4a document. HDMI 1.4b is the last version of the standard that HDMI LA is responsible for. All future versions of the HDMI Specification were produced by the HDMI Forum, created on October 25, 2011.
Version 2.0
HDMI 2.0, referred to by some manufacturers as HDMI UHD, was released on September 4, 2013.
HDMI 2.0 increases the maximum bandwidth to 18.0 Gbit/s. HDMI 2.0 uses TMDS encoding for video transmission like previous versions, giving it a maximum video bandwidth of 14.4 Gbit/s. This enables HDMI 2.0 to carry 4K video at 60 Hz with 24 bit/px color depth. Other features of HDMI 2.0 include support for the Rec. 2020 color space, up to 32 audio channels, up to 1536 kHz audio sample frequency, dual video streams to multiple users on the same screen, up to four audio streams, 4:2:0 chroma subsampling, 25 fps 3D formats, support for the 21:9 aspect ratio, dynamic synchronization of video and audio streams, the HE-AAC and DRA audio standards, improved 3D capability, and additional CEC functions.
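As an illustrative check of the figures above, 4K 60 Hz at 24 bit/px fits within the 14.4 Gbit/s of video bandwidth; the 594 MHz pixel clock used below is the standard CTA-861 timing (including blanking) for 3840×2160 at 60 Hz:

```python
# HDMI 2.0 bandwidth check for 4K 60 Hz at 24 bit/px.
total_bandwidth = 18.0e9                  # 3 channels x 6.0 Gbit/s
video_bandwidth = total_bandwidth * 8/10  # TMDS 8b/10b coding -> 14.4 Gbit/s

pixel_clock_hz = 594e6                    # CTA-861 timing for 3840x2160 @ 60 Hz
required = pixel_clock_hz * 24            # 24 bit/px -> 14.256 Gbit/s

print(video_bandwidth / 1e9)        # 14.4
print(required <= video_bandwidth)  # True
```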
HDMI 2.0a was released on April 8, 2015, and added support for High Dynamic Range (HDR) video with static metadata.
HDMI 2.0b was released in March 2016. HDMI 2.0b initially supported the same HDR10 standard as HDMI 2.0a, as specified in the CTA-861.3 specification. In December 2016, additional support for HDR video transport was added to HDMI 2.0b in the CTA-861-G specification, which extends the static metadata signaling to include Hybrid log–gamma (HLG).
Version 2.1
HDMI 2.1 was officially announced by the HDMI Forum on January 4, 2017, and was released on November 28, 2017. It adds support for higher resolutions and higher refresh rates, including 4K 120 Hz and 8K 120 Hz. HDMI 2.1 also introduces a new HDMI cable category called Ultra High Speed (referred to as 48G during development), which certifies cables at the new higher speeds that these formats require. Ultra High Speed HDMI cables are backwards compatible with older HDMI devices, and older cables are compatible with new HDMI 2.1 devices, though the full 48 Gbit/s bandwidth is only supported with the new cables.
The following features were added to the HDMI 2.1 Specification:
Maximum supported format is 10K at 120Hz
Dynamic HDR for specifying HDR metadata on a scene-by-scene or even a frame-by-frame basis
Note: While HDMI 2.1 did standardize transport of dynamic HDR metadata over HDMI, in actuality it only formalized dynamic metadata interfaces already utilized by Dolby Vision and HDR10+ in HDMI 2.0, which is why neither Dolby Vision nor HDR10+ require HDMI 2.1 to function properly.
Display Stream Compression (DSC) 1.2 is used for video formats higher than 8K with 4:2:0 chroma subsampling
High Frame Rate (HFR) for 4K, 8K, and 10K, which adds support for refresh rates up to 120Hz
Enhanced Audio Return Channel (eARC) for object-based audio formats such as Dolby Atmos and DTS:X
Enhanced refresh rate and latency reduction features:
Variable Refresh Rate (VRR) reduces or eliminates lag, stutter and frame tearing for more fluid motion in games
Quick Media Switching (QMS) for movies and video eliminates the delay that can result in blank screens before content begins to be displayed
Quick Frame Transport (QFT) reduces latency by bursting individual pictures across the HDMI link as fast as possible when the link's hardware supports more bandwidth than the minimum amount needed for the resolution and frame rate of the content. With QFT, individual pictures arrive earlier and some hardware blocks can be fully powered off for longer periods of time between pictures to reduce heat generation and extend battery life.
Auto Low Latency Mode (ALLM) When a display device supports the option to either optimize its pixel processing for best latency or best pixel processing, ALLM allows the current HDMI source device to automatically select, based on its better understanding of the nature of its own content, which mode the user would most likely prefer.
Video formats that require more bandwidth than 18.0Gbit/s (4K 60Hz 8bpc RGB), such as 4K 60Hz 10bpc (HDR), 4K 120Hz, and 8K 60Hz, may require the new "Ultra High Speed" or "Ultra High Speed with Ethernet" cables. HDMI 2.1's other new features are supported with existing HDMI cables.
The increase in maximum bandwidth is achieved by increasing both the bitrate of the data channels and the number of channels. Previous HDMI versions use three data channels (each operating at up to 6.0 Gbit/s in HDMI 2.0, or up to 3.4 Gbit/s in HDMI 1.4), with an additional channel for the TMDS clock signal, which runs at a fraction of the data channel speed (one tenth the speed, or up to 340 MHz, for signaling rates up to 3.4 Gbit/s; one fortieth the speed, or up to 150 MHz, for signaling rates between 3.4 and 6.0 Gbit/s). HDMI 2.1 doubles the signaling rate of the data channels to 12 Gbit/s. The structure of the data has been changed to use a new packet-based format with an embedded clock signal, which allows what was formerly the TMDS clock channel to be used as a fourth data channel instead, increasing the signaling rate across that channel to 12 Gbit/s as well. These changes increase the aggregate bandwidth from 18.0 Gbit/s (3 × 6.0 Gbit/s) to 48.0 Gbit/s (4 × 12.0 Gbit/s), a 2.66× improvement. In addition, the data is transmitted more efficiently by using a 16b/18b encoding scheme, which dedicates a larger percentage of the bandwidth to data rather than DC balancing compared to the TMDS scheme used by previous versions (88.9% compared to 80%). This, in combination with the increased bandwidth, raises the maximum data rate of HDMI 2.1 from 14.4 Gbit/s to 42.6 Gbit/s, approximately 2.96× the data rate of HDMI 2.0.
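The data-rate figures in the paragraph above can be reproduced directly; an illustrative check:

```python
# HDMI 2.1 vs HDMI 2.0 data-rate arithmetic, per the paragraph above.
hdmi20_data_rate = 3 * 6.0e9 * 8 / 10          # 3 lanes, 8b/10b coding -> 14.4 Gbit/s
hdmi21_bandwidth = 4 * 12.0e9                  # 4 lanes at 12 Gbit/s -> 48 Gbit/s
hdmi21_data_rate = hdmi21_bandwidth * 16 / 18  # 16b/18b coding, ~88.9% efficient

print(round(hdmi21_data_rate / 1e9, 2))               # 42.67
print(round(hdmi21_data_rate / hdmi20_data_rate, 2))  # 2.96
```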
The 48 Gbit/s bandwidth provided by HDMI 2.1 is enough for 8K resolution at approximately 50 Hz, with 8 bpc RGB or 4:4:4 color. To achieve even higher formats, HDMI 2.1 can use Display Stream Compression (DSC). Using DSC, formats up to 8K (7680×4320) 120 Hz or 10K (10240×4320) 100 Hz at 8 bpc RGB/4:4:4 are possible. Using 4:2:2 or 4:2:0 chroma subsampling in combination with DSC can allow for even higher formats.
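The compression ratio required for the headline DSC formats follows from the same arithmetic; the sketch below assumes the standard 10K pixel count of 10240×4320 and ignores blanking intervals, so the result is a lower bound:

```python
# Approximate DSC compression needed for 10K 100 Hz at 24 bit/px over HDMI 2.1.
available = 4 * 12.0e9 * 16 / 18        # HDMI 2.1 data rate, ~42.67 Gbit/s
uncompressed = 10240 * 4320 * 100 * 24  # active pixels only, ~106.2 Gbit/s

print(round(uncompressed / available, 2))  # 2.49 (minimum compression ratio)
```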
HDMI 2.1 includes HDR10+ as part of Vendor Specific Data Block with OUI 90-84-8b for "HDR10+ Technologies, LLC".
Version comparison
The "version" of a connection depends on the versions of the HDMI ports on the source and sink devices, not on the HDMI cable. The different categories of HDMI cable only affect the bandwidth (maximum resolution / refresh rate) of the connection. Other features such as audio, 3D, chroma subsampling, or variable refresh rate depend only on the versions of the ports, and are not affected by what type of HDMI cable is used. The only exception to this is Ethernet-over-HDMI, which requires an "HDMI with Ethernet" cable.
Products are not required to implement all features of a version to be considered compliant with that version, as most features are optional. For example, displays with HDMI 1.4 ports do not necessarily support the full 340 MHz TMDS clock allowed by HDMI 1.4; they are commonly limited to lower speeds such as 300 MHz (1080p 120 Hz) or even as low as 165 MHz (1080p 60 Hz) at the manufacturer's discretion, but are still considered HDMI 1.4-compliant. Likewise, features like 10 bpc (30 bit/px) color depth may also not be supported, even if the HDMI version allows it and the display supports it over other interfaces such as DisplayPort.
Feature support will therefore vary from device to device, even within the same HDMI version.
Main specifications
Refresh frequency limits for standard video
HDMI 1.0 and 1.1 are restricted to transmitting only certain video formats, defined in EIA/CEA-861-B and in the HDMI Specification itself. HDMI 1.2 and all later versions allow any arbitrary resolution and frame rate (within the bandwidth limit). Formats that are not supported by the HDMI Specification (i.e., no standardized timings defined) may be implemented as a vendor-specific format. Successive versions of the HDMI Specification continue to add support for additional formats (such as 4K resolutions), but the added support is to establish standardized timings to ensure interoperability between products, not to establish which formats are or aren't permitted. Video formats do not require explicit support from the HDMI Specification in order to be transmitted and displayed.
Individual products may have stricter limits than those listed below, since HDMI devices are not required to support the maximum bandwidth of the HDMI version that they implement. Therefore, it is not guaranteed that a display will support the refresh rates listed in this table, even if the display has the required HDMI version.
Uncompressed 8 bpc (24 bit/px) color depth and RGB or 4:4:4 color format are assumed in this table except where noted.
Refresh frequency limits for HDR10 video
HDR10 requires 10 bpc (30 bit/px) color depth, which uses 25% more bandwidth than standard 8 bpc video.
Uncompressed 10 bpc color depth and RGB or 4:4:4 color format are assumed in this table except where noted.
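The 25% figure follows directly from the per-pixel bit counts; a trivial check:

```python
# Bandwidth ratio of 10 bpc vs 8 bpc video (three color components per pixel).
bits_hdr10 = 10 * 3   # 30 bit/px
bits_sdr = 8 * 3      # 24 bit/px

print(bits_hdr10 / bits_sdr)  # 1.25, i.e. 25% more bandwidth
```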
Feature support
The features defined in the HDMI Specification that an HDMI device may implement are listed below. For historical interest, the version of the HDMI Specification in which the feature was first added is also listed. All features of the HDMI Specification are optional; HDMI devices may implement any combination of these features.
Although the "HDMI version numbers" are commonly misused as a way of indicating that a device supports certain features, this notation has no official meaning and is considered improper by HDMI Licensing. There is no officially-defined correlation between features supported by a device and any claimed "version numbers", as version numbers refer to historical editions of the HDMI Specification document, not to particular classes of HDMI devices. Manufacturers are forbidden from describing their devices using HDMI version numbers, and are required to identify support for features by listing explicit support for them, but the HDMI forum has received criticism for lack of enforcement of these policies.
Full HD Blu-ray Disc and HD DVD video (version 1.0)
Consumer Electronic Control (CEC) (version 1.0)
DVD-Audio (version 1.1)
Super Audio CD (DSD) (version 1.2)
Auto Lip-Sync Correction (version 1.3)
Dolby TrueHD / DTS-HD Master Audio bitstream capable (version 1.3)
Updated list of CEC commands (version 1.3a)
3D video (version 1.4)
Ethernet channel (100Mbit/s) (version 1.4)
Audio return channel (ARC) (version 1.4)
4 audio streams (version 2.0)
Dual View (version 2.0)
Perceptual quantizer HDR EOTF (SMPTE ST 2084) (version 2.0a)
Hybrid log–gamma (HLG) HDR EOTF (version 2.0a)
Static HDR metadata (SMPTE ST 2086) (version 2.0a)
Dynamic HDR metadata (SMPTE ST 2094) (version 2.0b)
Enhanced audio return channel (eARC) (version 2.1)
Variable Refresh Rate (VRR) (version 2.1)
Quick Media Switching (QMS) (version 2.1)
Quick Frame Transport (QFT) (version 2.1)
Auto Low Latency Mode (ALLM) (version 2.1)
Display Stream Compression (DSC) (version 2.1)
Source-Based Tone Mapping (SBTM) (version 2.1a)
Applications
Blu-ray Disc and HD DVD players
Blu-ray Disc and HD DVD, introduced in 2006, offer high-fidelity audio features that require HDMI for best results. HDMI 1.3 can transport Dolby Digital Plus, Dolby TrueHD, and DTS-HD Master Audio bitstreams in compressed form. This capability allows for an AV receiver with the necessary decoder to decode the compressed audio stream. The Blu-ray specification does not include video encoded with either deep color or xvYCC; thus, HDMI 1.0 can transfer Blu-ray discs at full video quality.
The HDMI 1.4 specification (released in 2009) added support for 3D video and is used by all Blu-ray 3D compatible players.
Blu-ray Disc Association (BDA) spokespersons stated in September 2014, at the IFA show in Berlin, Germany, that Ultra HD Blu-ray players and 4K discs were expected to become available in the second half of 2015, and that such Blu-ray UHD players would be required to include an HDMI 2.0 output supporting HDCP 2.2.
Blu-ray permits secondary audio decoding, whereby the disc content can tell the player to mix multiple audio sources together before final output. Some Blu-ray and HD DVD players can decode all of the audio codecs internally and can output LPCM audio over HDMI. Multichannel LPCM can be transported over an HDMI connection, and as long as the AV receiver implements multichannel LPCM audio over HDMI and implements HDCP, the audio reproduction is equal in resolution to HDMI 1.3 bitstream output. Some low-cost AV receivers, such as the Onkyo TX-SR506, do not allow audio processing over HDMI and are labelled as "HDMI pass through" devices. Virtually all modern AV receivers now offer HDMI 1.4 inputs and outputs with processing for all of the audio formats offered by Blu-ray Discs and other HD video sources. During 2014, several manufacturers introduced premium AV receivers that include one or more HDMI 2.0 inputs along with an HDMI 2.0 output. However, not until 2015 did most major manufacturers of AV receivers also support HDCP 2.2, as needed for certain high-quality UHD video sources such as Blu-ray UHD players.
Digital cameras and camcorders
Most consumer camcorders, as well as many digital cameras, are equipped with a mini-HDMI connector (type C connector).
Some cameras also offer 4K and 3D capability, including models costing less than US$900; these require at least a TV or monitor with an HDMI 1.4a port.
Although cameras capable of HD video often include an HDMI interface for playback or even live preview, the image processor and the video processor of cameras usable for uncompressed video must be able to deliver the full image resolution at the specified frame rate in real time without any missing frames causing jitter. Therefore, usable uncompressed video out of HDMI is often called "clean HDMI".
Personal computers
Personal computers (PCs) with a DVI interface are capable of video output to an HDMI-enabled monitor. Some PCs include an HDMI interface and may also be capable of HDMI audio output, depending on specific hardware. For example, Intel's motherboard chipsets since the 945G and NVIDIA's GeForce 8200/8300 motherboard chipsets are capable of 8-channel LPCM output over HDMI. Eight-channel LPCM audio output over HDMI with a video card was first seen with the ATI Radeon HD 4850, which was released in June 2008 and is implemented by other video cards in the ATI Radeon HD 4000 series. Linux can drive 8-channel LPCM audio over HDMI if the video card has the necessary hardware and implements the Advanced Linux Sound Architecture (ALSA). The ATI Radeon HD 4000 series implements ALSA. Cyberlink announced in June 2008 that they would update their PowerDVD playback software to allow 192 kHz/24-bit Blu-ray Disc audio decoding in Q3-Q4 of 2008. Corel's WinDVD 9 Plus has 96 kHz/24-bit Blu-ray Disc audio decoding.
Even with an HDMI output, a computer may not be able to produce signals that implement HDCP, Microsoft's Protected Video Path, or Microsoft's Protected Audio Path. Several early graphic cards were labelled as "HDCP-enabled" but did not have the hardware needed for HDCP; this included some graphic cards based on the ATI X1600 chipset and certain models of the NVIDIA Geforce 7900 series. The first computer monitors that could process HDCP were released in 2005; by February 2006 a dozen different models had been released. The Protected Video Path was enabled in graphic cards that had HDCP capability, since it was required for output of Blu-ray Disc and HD DVD video. In comparison, the Protected Audio Path was required only if a lossless audio bitstream (such as Dolby TrueHD or DTS-HD MA) was output. Uncompressed LPCM audio, however, does not require a Protected Audio Path, and software programs such as PowerDVD and WinDVD can decode Dolby TrueHD and DTS-HD MA and output it as LPCM. A limitation is that if the computer does not implement a Protected Audio Path, the audio must be downsampled to 16-bit 48 kHz but can still output at up to 8 channels. No graphic cards were released in 2008 that implemented the Protected Audio Path.
The Asus Xonar HDAV1.3 became the first HDMI sound card that implemented the Protected Audio Path and could both bitstream and decode lossless audio (Dolby TrueHD and DTS-HD MA), although bitstreaming is only available if using the ArcSoft TotalMedia Theatre software. It has an HDMI 1.3 input/output, and Asus says that it can work with most video cards on the market.
In September 2009, AMD announced the ATI Radeon HD 5000 series video cards, which have HDMI 1.3 output (deep color, xvYCC wide gamut capability and high bit rate audio), 8-channel LPCM over HDMI, and an integrated HD audio controller with a Protected Audio Path that allows bitstream output over HDMI for AAC, Dolby AC-3, Dolby TrueHD and DTS-HD Master Audio formats. The ATI Radeon HD 5870 released in September 2009 is the first video card that allows bitstream output over HDMI for Dolby TrueHD and DTS-HD Master Audio. The AMD Radeon HD 6000 Series implements HDMI 1.4a. The AMD Radeon HD 7000 Series implements HDMI 1.4b.
In December 2010, it was announced that several computer vendors and display makers including Intel, AMD, Dell, Lenovo, Samsung, and LG would stop using LVDS (actually, FPD-Link) from 2013 and legacy DVI and VGA connectors from 2015, replacing them with DisplayPort and HDMI.
On August 27, 2012, Asus announced a new monitor that produces its native resolution of 2560×1440 via HDMI 1.4.
On September 18, 2014, Nvidia launched GeForce GTX 980 and GTX 970 (with GM204 chip) with HDMI 2.0 support. On January 22, 2015, GeForce GTX 960 (with GM206 chip) launched with HDMI 2.0 support. On March 17, 2015, GeForce GTX TITAN X (GM200) launched with HDMI 2.0 support. On June 1, 2015, GeForce GTX 980 Ti (with GM200 chip) launched with HDMI 2.0 support. On August 20, 2015, GeForce GTX 950 (with GM206 chip) launched with HDMI 2.0 support.
On May 6, 2016, Nvidia launched the GeForce GTX 1080 (GP104 GPU) with HDMI 2.0b support.
On September 1, 2020, Nvidia launched the GeForce RTX 30 series, the world's first discrete graphics cards with support for the full 48Gbit/s bandwidth with Display Stream Compression 1.2 of HDMI 2.1.
Gaming consoles
Beginning with the seventh generation of video game consoles, most consoles support HDMI. Video game consoles that support HDMI include the Xbox 360 (1.2a), Xbox One (1.4b), Xbox One S (2.0a), Xbox One X (2.0b), PlayStation 3 (1.3a), PlayStation 4 (1.4b), PlayStation 4 Pro (2.0a), Wii U (1.4a), Nintendo Switch (1.4b), Nintendo Switch (OLED model) (2.0a), Xbox Series X and Series S (2.1), and PlayStation 5 (2.1).
Tablet computers
Some tablet computers, such as the Microsoft Surface, Motorola Xoom, BlackBerry PlayBook, Vizio Vtab1008 and Acer Iconia Tab A500, implement HDMI using Micro-HDMI (Type D) ports. Others, such as the ASUS Eee Pad Transformer implement the standard using mini-HDMI (type C) ports. All iPad models have a special A/V adapter that converts Apple's data line to a standard HDMI (Type A) port. Samsung has a similar proprietary thirty-pin port for their Galaxy Tab 10.1 that can adapt to HDMI as well as USB drives. The Dell Streak 5 smartphone/tablet hybrid is capable of outputting over HDMI. While the Streak uses a PDMI port, a separate cradle adds HDMI compatibility. Most Chinese-made tablets running Android OS provide HDMI output using a mini-HDMI (type C) port. Most new laptops and desktops now have built in HDMI as well.
Mobile phones
Many mobile phones can produce an output of HDMI video via a micro-HDMI connector, SlimPort, MHL or other adapter.
Legacy compatibility
HDMI can only be used with older analog-only devices (using connections such as SCART, VGA, RCA, etc.) by means of a digital-to-analog converter or AV receiver, as the interface does not carry any analog signals (unlike DVI, where devices with DVI-I ports accept or provide either digital or analog signals). Cables are available that contain the necessary electronics, but it is important to distinguish these active converter cables from passive HDMI to VGA cables (which are typically cheaper as they do not include any electronics). The passive cables are only useful with a device that generates or expects HDMI signals on a VGA connector, or VGA signals on an HDMI connector; this is a non-standard feature, not implemented by most devices.
HDMI Alternate Mode for USB Type-C
The HDMI Alternate Mode for USB-C allows HDMI-enabled sources with a USB-C connector to directly connect to standard HDMI display devices, without requiring an adapter. The standard was released in September 2016, and supports all HDMI 1.4b features such as video resolutions up to Ultra HD 30 Hz and CEC. Previously, the similar DisplayPort Alternate Mode could be used to connect USB Type-C sources to HDMI displays, but that approach required active adapters to convert from DisplayPort to HDMI, whereas HDMI Alternate Mode connects to the display natively.
The Alternate Mode reconfigures the four SuperSpeed differential pairs present in USB-C to carry the three HDMI TMDS channels and the clock signal. The two Sideband Use pins (SBU1 and SBU2) are used to carry the HDMI Ethernet and Audio Return Channel and the Hot Plug Detect functionality (HEAC+/Utility pin and HEAC−/HPD pin). As there are not enough reconfigurable pins remaining in USB-C to accommodate the DDC clock (SCL), DDC data (SDA), and CEC, these three signals are bridged between the HDMI source and sink via the USB Power Delivery 2.0 (USB-PD) protocol, and are carried over the USB-C Configuration Channel (CC) wire. This is possible because the cable is electronically marked, i.e., it contains a USB-PD node that serves to tunnel the DDC and CEC traffic from the source over the Configuration Channel to the node in the cable; these USB-PD messages are received and relayed to the HDMI sink as regenerated DDC (SCL and SDA) or CEC signals.
Relationship with DisplayPort
The DisplayPort audio/video interface was introduced in May 2006. In recent years, DisplayPort connectors have become a common feature of premium products such as displays, desktop computers, and video cards; most of the companies producing DisplayPort equipment are in the computer sector. The DisplayPort website states that DisplayPort is expected to complement HDMI, but 100% of HD and UHD TVs have shipped with HDMI connectivity. DisplayPort supported some advanced features useful for multimedia content creators and gamers (e.g. 5K, Adaptive-Sync), which is why most GPUs carried DisplayPort outputs. These features were added to the official HDMI specification somewhat later, and with the introduction of HDMI 2.1 the gaps have largely closed (e.g. with VRR / Variable Refresh Rate).
DisplayPort uses a self-clocking, micro-packet-based protocol that allows for a variable number of differential LVDS lanes as well as flexible allocation of bandwidth between audio and video, and allows encapsulating multi-channel compressed audio formats in the audio stream. DisplayPort 1.2 supports multiple audio/video streams, variable refresh rate (FreeSync), Display Stream Compression (DSC), and Dual-mode LVDS/TMDS transmitters compatible with HDMI 1.2 or 1.4. Revision 1.3 increases overall transmission bandwidth to 32.4 Gbit/s with the new HBR3 mode featuring 8.1 Gbit/s per lane; it requires Dual-mode with mandatory HDMI 2.0 compatibility and HDCP 2.2. Revision 1.4 adds support for the BT.2020 color space and HDR10 extensions from CTA-861.3, including static and dynamic metadata.
The DisplayPort connector is compatible with HDMI and can transmit single-link DVI and HDMI 1.2/1.4/2.0 signals using attached passive adapters or adapter cables. The source device includes a dual-mode transmitter that supports both LVDS signals for DisplayPort and TMDS signals for DVI/HDMI. The same external connector is used for both protocols; when a DVI/HDMI passive adapter is attached, the transmitter circuit switches to TMDS mode. DisplayPort Dual-mode ports and cables/adapters are typically marked with the DisplayPort++ logo. Thunderbolt ports with an mDP connector also support Dual-mode passive HDMI adapters/cables. Conversion to dual-link DVI and component video (VGA/YPbPr) requires active powered adapters.
The USB 3.1 Type-C connector is an emerging standard that replaces legacy video connectors such as mDP, Thunderbolt, HDMI, and VGA in mobile devices. USB-C connectors can transmit DisplayPort video to docks and displays using standard USB Type-C cables or Type-C to DisplayPort cables and adapters; USB-C also supports HDMI adapters that actively convert from DisplayPort to HDMI 1.4 or 2.0. The DisplayPort Alternate Mode for USB Type-C specification was published in 2015. Because USB Type-C chipsets are not required to include Dual-mode transmitters and may support only the DisplayPort LVDS protocol, passive DP-HDMI adapters do not work with Type-C sources.
DisplayPort has a royalty rate of US$0.20 per unit (from patents licensed by MPEG LA), while HDMI has an annual fee of US$10,000 and a per unit royalty rate of between $0.04 and $0.15.
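The two fee structures above trade a fixed annual cost against per-unit rates, so which is cheaper depends on volume. A minimal comparison, using an assumed mid-range HDMI per-unit royalty of $0.10 (the quoted range is $0.04 to $0.15):

```python
def displayport_fee(units):
    """Per-unit royalty only: US$0.20 per unit."""
    return 0.20 * units

def hdmi_fee(units, per_unit=0.10):
    """US$10,000 annual fee plus a per-unit royalty (assumed mid-range)."""
    return 10_000 + per_unit * units

# At modest volumes the flat annual fee dominates; at high volumes the
# lower per-unit rate can make HDMI cheaper overall.
for units in (10_000, 1_000_000):
    print(units, round(displayport_fee(units)), round(hdmi_fee(units)))
# 10000 2000 11000
# 1000000 200000 110000
```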
HDMI has a few advantages over DisplayPort, such as the ability to carry Consumer Electronics Control (CEC) signals and electrical compatibility with DVI (though practically limited to single-link DVI rates). Also, HDMI can sustain full bandwidth for up to 10 meters of cable length, and there are certification programs to ensure this; DisplayPort cables, conversely, do not ensure full bandwidth beyond 3 meters. However, some active cables extend the distance to 15 meters at certain resolutions, and specialist optical extender solutions exist to extend distances even further by sending the signal over fiber optic cable.
Relationship with MHL
Mobile High-Definition Link (MHL) is an adaptation of HDMI intended to connect mobile devices such as smartphones and tablets to high-definition televisions (HDTVs) and displays. Unlike DVI, which is compatible with HDMI using only passive cables and adapters, MHL requires that the HDMI socket be MHL-enabled, otherwise an active adapter (or dongle) is required to convert the signal to HDMI. MHL is developed by a consortium of five consumer electronics manufacturers, several of which are also behind HDMI.
MHL pares down the three TMDS channels in a standard HDMI connection to a single one running over any connector that provides at least five pins. This lets existing connectors in mobile devices such as micro-USB be used, avoiding the need for additional dedicated video output sockets. The USB port switches to MHL mode when it detects a compatible device is connected.
In addition to the features in common with HDMI (such as HDCP encrypted uncompressed high-definition video and eight-channel surround sound), MHL also adds the provision of power charging for the mobile device while in use, and also enables the TV remote to control it. Although support for these additional features requires connection to an MHL-enabled HDMI port, power charging can also be provided when using active MHL to HDMI adapters (connected to standard HDMI ports), provided there is a separate power connection to the adapter.
Like HDMI, MHL defines a USB-C Alternate Mode to support the MHL standard over USB-C connections.
Version 1.0 supported 720p/1080i 60 Hz (RGB/4:4:4 pixel encoding) with a bandwidth of 2.25 Gbit/s. Versions 1.3 and 2.0 added support for 1080p 60 Hz (4:2:2) with a bandwidth of 3 Gbit/s in PackedPixel mode. Version 3.0 increased the bandwidth to 6 Gbit/s to support Ultra HD (3840 × 2160) 30 Hz video, and also changed from being frame-based, like HDMI, to packet-based.
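A rough calculation shows why 1080p 60 Hz needed 4:2:2 PackedPixel mode within a 3 Gbit/s budget. The blanking-interval totals used here (2200 × 1125) are the standard CEA-861 timing for 1080p60, and line-coding overhead is ignored for simplicity:

```python
def video_gbps(h_total, v_total, fps, bits_per_pixel):
    """Raw video bit rate including blanking intervals, in Gbit/s."""
    return h_total * v_total * fps * bits_per_pixel / 1e9

# 1080p60 with standard CEA-861 blanking: 2200 x 1125 total pixels per frame.
rgb_24bpp = video_gbps(2200, 1125, 60, 24)    # 4:4:4 -> ~3.56 Gbit/s, too much
ycbcr_16bpp = video_gbps(2200, 1125, 60, 16)  # 4:2:2 -> ~2.38 Gbit/s, fits in 3
print(round(rgb_24bpp, 2), round(ycbcr_16bpp, 2))  # 3.56 2.38
```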
The fourth version, superMHL, increased bandwidth by operating over multiple TMDS differential pairs (up to a total of six) allowing a maximum of 36 Gbit/s. The six lanes are supported over a reversible 32-pin superMHL connector, while four lanes are supported over USB-C Alternate Mode (only a single lane is supported over micro-USB/HDMI). Display Stream Compression (DSC) is used to allow up to 8K Ultra HD (7680 × 4320) 120 Hz HDR video, and to support Ultra HD 60 Hz video over a single lane.
See also
List of display interfaces
DisplayPort
Thunderbolt (interface)
USB-C
Wireless HDMI
References
External links
HDMI Licensing, LLC.
Dolby Podcast Episode 60 – March 26, 2009 Part one of a two-part discussion with Steve Venuti, President, and Jeff Park, Technology Evangelist, of HDMI Licensing.
Dolby Podcast Episode 62 – April 23, 2009 Part two of a two-part discussion with Steve Venuti, President, and Jeff Park, Technology Evangelist, of HDMI Licensing.
Audiovisual connectors
Audiovisual introductions in 2002
Computer connectors
Computer display standards
Digital display connectors
Film and video technology
High-definition television
Japanese inventions
Television technology
Television terminology
Television transmission standards
Video signal
Serial buses |
334641 | https://en.wikipedia.org/wiki/Directory%20service | Directory service | In computing, a directory service or name service maps the names of network resources to their respective network addresses. It is a shared information infrastructure for locating, managing, administering and organizing everyday items and network resources, which can include volumes, folders, files, printers, users, groups, devices, telephone numbers and other objects. A directory service is a critical component of a network operating system. A directory server or name server is a server which provides such a service. Each resource on the network is considered an object by the directory server. Information about a particular resource is stored as a collection of attributes associated with that resource or object.
A directory service defines a namespace for the network. The namespace is used to assign a name (unique identifier) to each of the objects. Directories typically have a set of rules determining how network resources are named and identified, which usually includes a requirement that the identifiers be unique and unambiguous. When using a directory service, a user does not have to remember the physical address of a network resource; providing a name locates the resource. Some directory services include access control provisions, limiting the availability of directory information to authorized users.
Comparison with relational databases
Several things distinguish a directory service from a relational database. For example, data in a directory can be redundant if the redundancy aids lookup performance.
Directory schemas are object classes, attributes, name bindings and knowledge (namespaces) where an object class has:
Must - attributes that each instance must have
May - attributes which can be defined for an instance but can be omitted, with the absence similar to NULL in a relational database
Attributes are sometimes multi-valued, allowing multiple naming attributes at one level (such as machine type and serial number concatenation, or multiple phone numbers for "work phone"). Attributes and object classes are usually standardized throughout the industry; for example, X.500 attributes and classes are often formally registered with the IANA for their object ID. Therefore, directory applications try to reuse standard classes and attributes to maximize the benefit of existing directory-server software.
Object instances are slotted into namespaces; each object class inherits from its parent object class (and ultimately from the root of the hierarchy), adding attributes to the must-may list. Directory services are often central to the security design of an IT system and have a correspondingly-fine granularity of access control.
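A minimal sketch of must/may validation makes the distinction above concrete. The 'person' class below borrows the standard cn/sn required attributes for illustration, but the validator itself is hypothetical, not part of any directory server's API:

```python
def validate_entry(entry, must, may):
    """Check a directory entry against an object class's must/may lists.

    Missing 'must' attributes are errors; 'may' attributes are optional
    (absence is akin to NULL in a relational database); attributes outside
    both lists are flagged as unknown.
    """
    missing = [a for a in must if a not in entry]
    unknown = [a for a in entry if a not in must and a not in may]
    return missing, unknown

# A 'person'-like class: 'cn' and 'sn' required; phone and mail optional.
must = {"cn", "sn"}
may = {"telephoneNumber", "mail"}

# Attributes can be multi-valued, e.g. several phone numbers at once.
ok_entry = {"cn": "Ada Lovelace", "sn": "Lovelace",
            "telephoneNumber": ["555-0100", "555-0199"]}
print(validate_entry(ok_entry, must, may))       # ([], [])
print(validate_entry({"cn": "Ada"}, must, may))  # (['sn'], [])
```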
Replication and distribution
Replication and distribution have distinct meanings in the design and management of a directory service. Replication is used to indicate that the same directory namespace (the same objects) are copied to another directory server for redundancy and throughput reasons; the replicated namespace is governed by the same authority. Distribution is used to indicate that multiple directory servers in different namespaces are interconnected to form a distributed directory service; each namespace can be governed by a different authority.
Implementations
Directory services were part of an Open Systems Interconnection (OSI) initiative for common network standards and multi-vendor interoperability. During the 1980s, the ITU and ISO created a set of standards for directory services, initially to support the requirements of inter-carrier electronic messaging and network-name lookup. The Lightweight Directory Access Protocol (LDAP) is based on the X.500 directory-information services, using the TCP/IP stack and an X.500 Directory Access Protocol (DAP) string-encoding scheme on the Internet.
Systems developed before X.500 include:
Domain Name System (DNS): The first directory service on the Internet, still in use
Hesiod: Based on DNS and used at MIT's Project Athena
Network Information Service (NIS): Originally known as Yellow Pages (YP), Sun Microsystems' implementation of a directory service for Unix network environments. It played a role similar to Hesiod.
NetInfo: Developed by NeXT during the late 1980s for NEXTSTEP. After its acquisition by Apple, it was released as open source and was the directory service for Mac OS X before it was deprecated for the LDAP-based Open Directory. Support for NetInfo was removed with the release of 10.5 Leopard.
Banyan VINES: First scalable directory service
NT Domains: Developed by Microsoft to provide directory services for Windows machines before the release of the LDAP-based Active Directory in Windows 2000. Windows Vista continues to support NT Domains after relaxing its minimum authentication protocols.
LDAP implementations
LDAP/X.500-based implementations include:
389 Directory Server: Free Open Source server implementation by Red Hat, with commercial support by Red Hat and SUSE.
Active Directory: Microsoft's directory service for Windows, originating from the X.500 directory, created for use in Exchange Server, first shipped with Windows 2000 Server and supported by successive versions of Windows
Apache Directory Server: Directory service, written in Java, supporting LDAP, Kerberos 5 and the Change Password Protocol; LDAPv3 certified
Apple Open Directory: Apple's directory server for Mac OS X, available through Mac OS X Server
eDirectory: NetIQ's implementation of directory services supports multiple architectures, including Windows, NetWare, Linux and several flavours of Unix and is used for user administration and configuration and software management; previously known as Novell Directory Services.
Red Hat Directory Server: Acquired from AOL's Netscape Security Solutions unit and released by Red Hat as a commercial product running on top of Red Hat Enterprise Linux; its community-supported upstream is the 389 Directory Server project, on which the FreeIPA identity-management project also builds.
Oracle Internet Directory: (OID) is Oracle Corporation's directory service, compatible with LDAP version 3.
Sun Java System Directory Server: Sun Microsystems' directory service
OpenDS: Open-source directory service in Java, backed by Sun Microsystems
Oracle Unified Directory: (OUD) is Oracle Corporation's next-generation unified directory solution. It integrates storage, synchronization, and proxy functionalities.
IBM Tivoli Directory Server: Custom build of an old OpenLDAP release
Windows NT Directory Services (NTDS), later renamed Active Directory, replaced the former NT Domain system.
Critical Path Directory Server
OpenLDAP: Derived from the original University of Michigan LDAP implementation (like Netscape, Red Hat, Fedora and Sun JSDS implementations), it supports all computer architectures (including Unix and Unix derivatives, Linux, Windows, z/OS and a number of embedded-realtime systems).
Lotus Domino
Nexor Directory
OpenDJ - a Java-based LDAP server and directory client that runs in any operating environment, under the CDDL license. Developed by ForgeRock until 2016; now maintained by the OpenDJ community
Open-source tools to create directory services include OpenLDAP, the Kerberos protocol and Samba software, which can function as a Windows domain controller with Kerberos and LDAP back ends. Administration is by GOsa or Samba SWAT.
Using name services
Unix systems
Name services on Unix systems are typically configured through nsswitch.conf. Information from name services can be retrieved with getent.
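For example, Python's standard pwd module goes through the same NSS lookup chain configured in nsswitch.conf, so it returns the same results as getent. This sketch assumes a conventional Unix system where uid 0 exists:

```python
import pwd

# Resolve uid 0 through the Name Service Switch (files, NIS, LDAP, ...);
# equivalent to `getent passwd 0` on the command line.
entry = pwd.getpwuid(0)
print(entry.pw_name, entry.pw_uid)  # typically: root 0
```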
See also
Access control list
Directory Services Markup Language
Hierarchical database model
LDAP Data Interchange Format
Metadirectory
Service delivery platform
Virtual directory
References
Citations
Sources
Computer access control
Computer access control protocols |
2570211 | https://en.wikipedia.org/wiki/Titan%20Rain | Titan Rain | Titan Rain was a series of coordinated attacks on computer systems in the United States since 2003; they were known to have been ongoing for at least three years. The attacks originated in Guangdong, China. The activity is believed to be associated with a state-sponsored advanced persistent threat. It was given the designation Titan Rain by the federal government of the United States.
Titan Rain hackers gained access to many United States defense contractor computer networks, which were targeted for their sensitive information, including those at Lockheed Martin, Sandia National Laboratories, Redstone Arsenal, and NASA.
Attackers
The attacks are believed to be the result of actions by People's Liberation Army Unit 61398. These hackers attacked both the US government (Defense Intelligence Agency) and the UK government (Ministry of Defence). In 2006, an "organised Chinese hacking group" shut down a part of the UK House of Commons computer system. The Chinese government has denied responsibility.
Consequences
The U.S. government has blamed the Chinese government for the 2004 attacks. Alan Paller, SANS Institute research director, stated that the attacks came from individuals with "intense discipline" and that "no other organization could do this if they were not a military". Such sophistication has pointed toward the People's Liberation Army as the attackers.
Titan Rain reportedly attacked multiple organizations, such as NASA and the FBI. Although no classified information was reported stolen, the hackers were able to steal unclassified information (e.g., information from a home computer) that could reveal strengths and weaknesses of the United States.
Titan Rain has also caused distrust between other countries (such as the United Kingdom and Russia) and China. The United Kingdom has stated officially that Chinese hackers attacked its governmental offices. Titan Rain has caused the rest of the world to be more cautious of attacks not just from China but from other countries as well.
See also
Advanced persistent threat
Computer network operations
Cyberwarfare
Moonlight Maze
Operation Aurora
Shawn Carpenter
Stakkato
References
External links
Hacker groups
Espionage scandals and incidents
Military intelligence
National security
Information sensitivity
Data security
Information operations and warfare
21st-century conflicts
Electronic warfare
Cyberattacks
Cyberwarfare in China
Hacking in the 2000s
Chinese advanced persistent threat groups |
60709308 | https://en.wikipedia.org/wiki/John%20Wick%20Hex | John Wick Hex | John Wick Hex is a neo-noir action strategy video game based on the John Wick franchise. It was released on 8 October 2019 for Microsoft Windows and macOS. The PlayStation 4 port of the game released on 5 May 2020. The Xbox One and Nintendo Switch ports of the game were released on 4 December 2020. The game was developed by British studio Bithell Games and is distributed by Good Shepherd Entertainment. The game serves as a narrative prequel to the film series.
Gameplay
John Wick Hex is a timeline strategy game with elements of resource management where the player maneuvers the titular character through a level on a hex-based grid, using various moves and actions to defeat enemies and avoid being hit by his foes.
To the player, the actions of John Wick and visible enemies are shown on a timeline similar to those used in video editing software, with actions taking different amounts of time. The player loads commands for John Wick to follow into this timeline, and the game pauses if the player has not added further actions to it. These actions may involve gaining a better position to fire from, taking a stance that improves John Wick's chances to hit, or taking defensive actions and improved cover from foes.
Players must also account for John Wick's limited ammunition (e.g., 15 bullets in his handgun): reloading discards any bullets left in the weapon's magazine. Although John Wick is able to grab weapons from enemies, this takes precious time, requiring the player to think through all the possible actions they can take before getting shot. Each time the player chooses to shoot, John Wick fires two rounds, mirroring the double-tap he uses on his enemies in the films.
Initially the level is covered by a type of fog of war, which is lifted as the player moves John Wick further into the level. The game has been described as a combination of tactics games such as X-COM with the fast-paced decision making of Superhot. At the completion of a level, the game can play back a replay of the movements, without waiting for any user input, mimicking some of the fluidity of the action scenes from the John Wick films.
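The timeline mechanic described above can be sketched as a small event queue in which actions of different lengths resolve in time order rather than in alternating turns. Everything here (action names, durations, the behavior of "click") is a simplified illustration of the idea, not data or logic from the game:

```python
import heapq

def resolve(actions, magazine=15):
    """Play queued (start_time, actor, action) events in time order.

    A 'shoot' expends two rounds (the double-tap); 'reload' refills the
    magazine, discarding whatever rounds were left in it.
    """
    log = []
    heapq.heapify(actions)
    while actions:
        t, actor, action = heapq.heappop(actions)
        if actor == "wick" and action == "shoot":
            if magazine < 2:
                action = "click"   # not enough rounds left for a double-tap
            else:
                magazine -= 2
        elif actor == "wick" and action == "reload":
            magazine = 15          # leftover rounds are lost
        log.append((t, actor, action, magazine))
    return log

events = [(0.0, "wick", "shoot"), (0.4, "thug", "advance"),
          (0.7, "wick", "shoot"), (1.2, "wick", "reload")]
for step in resolve(events, magazine=3):
    print(step)
```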
Plot
Hex, an international criminal mastermind, abducts Winston and Charon of the New York Continental as an act of rebellion against the High Table and stows them away at an undisclosed location. In response, a contract is put out by the High Table to retrieve them and John Wick is dispatched to ensure that fealty is sworn. The plot follows Wick in his prime as he sets out to dismantle Hex's network of underbosses and enforcers across New York and Switzerland on his way to end Hex's reign and reassert the High Table's dominance.
Development
The game John Wick Hex originated after a casual discussion between game developer Mike Bithell and his friend Ben Andac, who works with Good Shepherd Entertainment. Both had just seen an action film and the conversation turned to how one would make a video game to capture the action and fight scenes from the John Wick films. Bithell suggested the idea of a turn-based tactical game, what he called "John Wick chess", in which the player would have to make new decisions as the situation changes.
Andac was later part of discussions between Lionsgate Entertainment and Good Shepherd Entertainment; the film production company was looking for novel concepts for video games that could be used to expand upon the John Wick film series. According to Bithell, Lionsgate wanted more than just an unoriginal game as a cash grab, but something that distanced itself from what one would have expected for a John Wick-oriented game, in the same manner that Lionsgate had seen the first film as a novel twist on a typical action movie. Andac recalled Bithell's concept as well as his unique style of game production, and put Bithell's idea out there.
This subsequently led to discussions between Bithell, Lionsgate, and Good Shepherd, after which Bithell was given the green light to start development of Hex in mid-2018. Bithell stated that he felt a responsibility toward the licensed property: "If you’re making a licensed game and you’re not trying to push the medium forward in my opinion you’re wasting your time."
In addition to reviewing the first two films, Bithell was given access to the script for John Wick: Chapter 3 – Parabellum prior to its production. Bithell had the opportunity to discuss the films' direction with Chad Stahelski, the series' director, during post-production editing, and also was able to work with the various stuntmen on the film to develop the animations to be used in the game. Bithell credited Stahelski with the fog of war concept to help create tensions with unknowns on each level.
Bithell wanted to capture the idea that combat in the John Wick universe is more like a conversation with back and forth responses, a factor that the stunt coordinators for the films have kept in mind. One key movie scene that inspired this approach came from the first Wick film, where Wick has run out of bullets, so he takes a moment to stun his enemy to give himself time to reload his weapon and fire again before the enemy recovers; Bithell wanted the gameplay to capture that type of flow. Bithell credits the film's stunt coordinator JoJo Eusebio with offering some potential moves for John Wick that are not shown in the film but make for good strategic elements in a video game, such as John Wick using enemies as moving cover by grabbing and then pushing a foe a short distance.
Originally, Bithell built the game out as the turn-based tactical game he had originally described, so that the player would direct John Wick during his turn, then wait for the other opponents to complete their moves. When showing this version of the game to Lionsgate executives, they were concerned that this made it appear that John Wick just stood there, taking the shots fired by his opponents. Bithell took this advice to heart and reworked the core game to use the timeline approach instead of the turn-based approach, so that John Wick's movements and enemy actions were happening simultaneously. Bithell regularly traveled to Los Angeles to keep Lionsgate abreast of the game's status and for additional feedback.
Actors Ian McShane and Lance Reddick reprise their roles from the films, while Troy Baker voices the character of Hex.
The game was first revealed in May 2019, just prior to the release of John Wick 3, with further details revealed at E3 2019. The game was released on 8 October 2019 for Microsoft Windows and MacOS. A PlayStation 4 port of the game was released on 5 May 2020, and for Nintendo Switch and Xbox One on 4 December 2020.
Reception
John Wick Hex received mixed or average reviews. The review aggregator Metacritic gave the game 74/100 based on 53 reviews.
Accolades
References
External links
John Wick
2019 video games
MacOS games
Action video games
Strategy video games
Video games based on films
Video games developed in the United Kingdom
Video games scored by Austin Wintory
Windows games
PlayStation 4 games
Nintendo Switch games
Xbox One games
Single-player video games |
481814 | https://en.wikipedia.org/wiki/Leased%20line | Leased line | A leased line is a private telecommunications circuit between two or more locations provided according to a commercial contract. It is sometimes also known as a private circuit, and as a data line in the UK. Typically, leased lines are used by businesses to connect geographically distant offices.
Unlike traditional telephone lines in the public switched telephone network (PSTN), leased lines are generally not switched circuits, and therefore do not have an associated telephone number. Each side of the line is permanently connected, always active, and dedicated to the other. Leased lines can be used for telephone, Internet, or other data communication services. Some are ringdown services, and some connect to a private branch exchange (PBX) or network router.
The primary factors affecting the recurring lease fees are the distance between end stations and the bandwidth of the circuit. Since the connection does not carry third-party communications, the carrier can assure a specified level of quality.
An Internet leased line is a premium Internet connectivity product, normally delivered over fiber, which provides uncontended, symmetrical bandwidth with full-duplex traffic. It is also known as an Ethernet leased line, dedicated line, data circuit or private line.
History
Leased line services (or private line services) became digital in the 1970s with the conversion of the Bell backbone network from analog to digital circuits. This allowed AT&T to offer Dataphone Digital Services (later re-branded digital data services), which started the deployment of ISDN and T1 lines to customer premises.
Leased lines were used to connect mainframe computers with terminals and remote sites, via IBM's Systems Network Architecture (created in 1974) or DEC's DECnet (created in 1975).
With the extension of digital services in the 1980s, leased lines were used to connect customer premises to frame relay or ATM networks. Access data rates increased from the original T1 option with maximum transmission speed of 1.544 Mbit/s up to T3 circuits.
In the 1990s, with the advances of the Internet, leased lines were also used to connect customer premises to ISP point of presence whilst the following decade saw a convergence of the aforementioned services (frame relay, ATM, Internet for businesses) with the MPLS integrated offerings.
Access data rates also evolved dramatically, reaching speeds of up to 10 Gbit/s in the early 21st century with the Internet boom and increased offerings in long-haul optical networks and metropolitan area networks.
Applications
Leased lines are used to build up private networks, private telephone networks (by interconnecting PBXs) or access the internet or a partner network (extranet).
Here is a review of the leased-line applications in network designs over time:
Site to site data connectivity
Terminating a leased line with two routers can extend network capabilities across sites. Leased lines were first used in the 1970s by enterprises with proprietary protocols such as IBM Systems Network Architecture and Digital Equipment's DECnet, and with TCP/IP in university and research networks before the Internet became widely available. Note that other Layer 3 protocols, such as Novell IPX, were used on enterprise networks until TCP/IP became ubiquitous in the 2000s. Today, point to point data circuits are typically provisioned as either TDM, Ethernet, or Layer 3 MPLS.
Site to site PBX connectivity
Terminating a leased line with two PBX allowed customers to by-pass PSTN for inter-site telephony. This allowed the customers to manage their own dial plan (and to use short extensions for internal telephone number) as well as to make significant savings if enough voice traffic was carried across the line (especially when the savings on the telephone bill exceeded the fixed cost of the leased line).
Site to network connectivity
As demand for data networking grew, telcos started to build more advanced networks using packet switching on top of their infrastructure. Thus, a number of telecommunication companies added ATM, Frame Relay or ISDN offerings to their services portfolio. Leased lines were used to connect the customer site to the telco network access point.
International private leased circuit
An international private leased circuit (IPLC) functions as a point-to-point private line.
IPLCs are usually time-division multiplexing (TDM) circuits that utilize the same circuit amongst many customers. The nature of TDM requires the use of a CSU/DSU and a router. Usually the router will include the CSU/DSU.
Then came the Internet (in the mid-1990s) and since then the most common application for leased line is to connect a customer to its ISP point of presence. With the changes that the Internet brought in the networking world other technologies were developed to propose alternatives to frame-relay or ATM networks such as VPNs (hardware and software) and MPLS networks (that are in effect an upgrade to TCP/IP of existing ATM/frame-relay infrastructures).
Availability
In the United Kingdom
In the UK, leased lines are available at speeds from 64 kbit/s increasing in 64 kbit/s increments to 2.048 Mbit/s over a channelised E1 tail circuit, and at speeds between 2.048 Mbit/s and 34.368 Mbit/s via channelised E3 tail circuits. The NTE terminates the circuit and provides the requested presentation, most frequently X.21; higher-speed interfaces such as G.703 or 10BaseT are also available. Some ISPs, however, use the term more loosely, defining a leased line as "any dedicated bandwidth service delivered over a leased fibre connection".
As of March 2018, leased line services are most commonly available in the region of 100 Mbit/s to 1 Gbit/s. In large cities, for example London, speeds of 10 Gbit/s are attainable.
In the United States
In the U.S., low-speed leased lines (56 kbit/s and below) are usually provided using analog modems. Higher-speed leased lines are usually presented using FT1 (fractional T1): a T1 bearer circuit with 1 to 24 timeslots of 56 or 64 kbit/s each. Customers typically manage their own network termination equipment, namely a channel service unit and data service unit (CSU/DSU).
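The fractional rates above follow directly from the T1 channel structure: each timeslot is a 64 kbit/s DS0, and a full T1 adds 8 kbit/s of framing on top of its 24 slots:

```python
def ft1_kbps(timeslots, per_slot=64):
    """Payload rate of a fractional T1 with the given number of DS0 timeslots."""
    return timeslots * per_slot

print(ft1_kbps(4))        # 256 -- a 4-slot FT1
print(ft1_kbps(24))       # 1536 -- payload of a full T1
print(ft1_kbps(24) + 8)   # 1544 -- full T1 line rate (1.544 Mbit/s) with framing
```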
In Hong Kong
In Hong Kong, leased lines are usually available at speeds of 64k, 128k, 256k, 512k, T1 (channelized or not) or E1 (less common). Whatever the speed, telcos usually provide the CSU/DSU and present to the customer on V.35 interface.
Fibre circuits are slowly replacing the traditional circuits and are available at nearly any bandwidth.
In India
In India, leased lines are available at speeds of 64 kbit/s, 128 kbit/s, 256 kbit/s, 512 kbit/s, 1 Mbit/s, 2 Mbit/s, 4 Mbit/s, 8 Mbit/s, 1000 Mbit/s, T1 (1.544 Mbit/s) or E1 (2.048 Mbit/s), and up to 622 Mbit/s. Customers are connected either through OFC, telephone lines, ADSL, or through Wi-Fi. Customers would have to manage their own network termination equipment, namely the channel service unit and data service unit.
In Italy
In Italy, leased lines are available at speeds of 64 kbit/s (terminated by a DCE2 or DCE2plus modem) or multiples of 64 kbit/s, from 128 kbit/s up to framed or unframed E1 (DCE3 modem), in digital form (a PDH service known as CDN, Circuito Diretto Numerico). Local telephone companies may also provide CDA (Circuito Diretto Analogico): plain copper dry pairs between two buildings, without any line termination. In the past (pre-2002) a full analog baseband was provided, giving customers the option to deploy xDSL technology between sites; nowadays the bearer channel is limited to 4 kHz, so the service is just a POTS connection without any setup channel.
For many purposes, leased lines are gradually being replaced by DSL and metro Ethernet.
Leased line alternatives
Leased lines are more expensive than alternative connectivity services including (ADSL, SDSL, etc.) because they are reserved exclusively to the leaseholder. Some internet service providers have therefore developed alternative products that aim to deliver leased-line type services (carrier Ethernet-based, zero contention, guaranteed availability), with more moderate bandwidth, over the standard UK national broadband network. While a leased line is full-duplex, most leased line alternatives provide only half-duplex or in many cases asymmetrical service.
See also
Circuit ID
Dark fibre
Dry loop
Tie line
References
Communication circuits
Local loop |
26802068 | https://en.wikipedia.org/wiki/DECwriter | DECwriter | The DECwriter series was a family of computer terminals from Digital Equipment Corporation (DEC). They were typically used in a fashion similar to a teletype, with a computer output being printed to paper and the user inputting information on the keyboard. In contrast to teletypes, the DECwriters were based on dot matrix printer technology, one of the first examples of such a system to be introduced. Versions lacking a keyboard were also available for use as computer printers, which eventually became the only models as smart terminals became the main way to interact with mainframes and minicomputers in the 1980s.
There were four series of machines, starting with the original DECwriter in 1970, the DECwriter II in 1974, DECwriter III in 1978, and the final DECwriter IV in 1982. The first three were physically similar, large machines mounted on a stand normally positioned above a box of fanfold paper. They differed primarily in speed and the selection of computer interfaces. The IV was significantly different, intended for desktop use and looking more like an IBM Selectric typewriter than a traditional printer. Most models were available without a keyboard for print-only usage, in which case they were later known as DECprinters.
The DECwriters were among DEC's best-selling products, notably the II and III series.
DECwriter
The original DECwriter was introduced in November 1970 at the Fall Joint Computer Conference. Also known by its model number, LA30, it was one of the earliest dot matrix printers to be introduced to market, only months after the seminal Centronics 101 that May at the Spring Joint Computer Conference. At the time, most small computer systems were accessed using surplus teletype units, and the LA30 was intended to be used in the same general fashion. As such, its only computer interface was a 30 mA current loop, as used on teletype machines, with the explicit goal of "having been designed to replace the standard Teletype Model 33, 35 and 37 KSR."
The LA30 used a 64-character ASCII-based character set, lacking lower-case characters and printing them as upper-case. It used a 7-pin print head forming glyphs on a 5x7 grid. It normally printed 80-column lines on standard inch wide tractor feed paper. It could print up to 30 characters per second (cps), matching the maximum interface speed of 300 bps (30 cps assuming one start and one stop bit); the interface could also run at 110 and 150 bps. However, carriage returns required of a second, during which the host computer had to send data it knew would not be printed, the so-called "fill characters" that were common on printers of the era.
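The 5x7 grid can be pictured as seven rows of five dots, one bit per pin position. A minimal sketch in Python (the bitmap for "A" below is illustrative, not DEC's actual character ROM):

```python
# How a 5x7 dot-matrix glyph is formed: each row is 5 bits wide, and
# a set bit means the corresponding pin fires at that position.
GLYPH_A = [
    0b01110,
    0b10001,
    0b10001,
    0b11111,
    0b10001,
    0b10001,
    0b10001,
]

def render(glyph):
    """Return the glyph as text, '#' for a fired pin, '.' for none."""
    return "\n".join(
        "".join("#" if row & (1 << (4 - col)) else "." for col in range(5))
        for row in glyph
    )

print(render(GLYPH_A))
```

Running this prints a block-letter "A", one text row per grid row.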
Mechanically, the machine was and came mounted on robust legs that raised the keyboard to standard desk height, with the top from the ground. Normally, a box of fanfold paper was placed below the printer mechanism, feeding upward through a slot in the bottom of the desk. The case around the keyboard was curved, somewhat similar to the ADM-3A. The entire front cover lifted upward to provide access to the printing mechanism, both for basic maintenance and for feeding in new paper; DEC suggested leaving behind the system to provide enough room to swing it fully open.
In June 1972, DEC introduced two new versions of the DECwriter, the LA30A which lacked a keyboard and was used as a dedicated printer, and the LA30-E which added an RS-232 interface option, the "E" standing for the new name for the port, EIA-232. A later addition was the LA30-P, the "P" referring to the addition of a parallel Centronics port, which had become an almost universal standard by the mid-1970s.
DECwriter II
A replacement for original line was announced in August 1974 with the introduction of the DECwriter II line and its first model, the LA36. The LA36 used the same basic printing mechanism as the LA30, and was physically similar although smaller and more rectangular. Like the LA30, the LA36 was also offered in a keyboard-less printer-only model, in this case known as the LA35.
The primary change was the addition of a data buffer that allowed it to store characters. This meant the terminal could continue accepting data from the host computer while the machine was performing carriage returns or other time-consuming operations, and then pick up printing once that was complete. When that occurred, it began printing characters as fast as possible until the buffer was empty again, at speeds as high as 60 cps. This had the added advantage that the host computer did not have to insert fill characters, which in turn led to simpler interfacing requirements and reduced device driver complexity.
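The effect of the buffer can be sketched with a small simulation (illustrative Python with simplified timings, not DEC's firmware): characters arrive from the host at a steady 30 cps even while the head is stalled on a carriage return, and afterwards the head drains the backlog at 60 cps until it catches up.

```python
from collections import deque

# Time advances in 1/60 s ticks: the host delivers one character every
# 2 ticks (30 cps) whether or not the head is busy; the head is stalled
# for the first `stall_ticks` ticks (a carriage return) and then drains
# the buffer at one character per tick (60 cps). No fill characters needed.
def simulate(line, stall_ticks=12):
    buffer, printed, peak = deque(), [], 0
    arrivals = iter(line)
    tick = 0
    while len(printed) < len(line):
        if tick % 2 == 0:                   # host side: 30 cps, never pauses
            ch = next(arrivals, None)
            if ch is not None:
                buffer.append(ch)
        peak = max(peak, len(buffer))
        if tick >= stall_ticks and buffer:  # printer side: 60 cps after stall
            printed.append(buffer.popleft())
        tick += 1
    return "".join(printed), peak

text, peak = simulate("HELLO WORLD")
```

Every character sent during the stall ends up printed in order, and the buffer depth stays small because the drain rate exceeds the arrival rate.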
There were numerous other changes as well. The character set now included a complete 128-character ASCII set, including upper and lower case as well as various control characters. The character set was stored in read only memory (ROM), and optional ROMs for Katakana and the APL symbols were also available. The 63-key keyboard followed the ANSI X4.14-1971 typewriter layout, and included a further 19 keys for numeric input and various controls. The tractor feed was much larger, with a fixed section on the left and a movable one on the right, allowing it to feed paper from wide and print up to 132 columns. The print head had enough force to print through six pieces of paper, allowing it to print using carbon paper or copy paper forms.
The systems were so popular that several third-party companies introduced add-on cards to give them more functionality. The Intertec Superdec offered 1200 bps support, double-wide characters, APL characters and even user-defined character sets. The Datasouth DS120 was similar, lacking the character sets but adding bidirectional printing. The Selanar Graphics II add-on offered bitmapped graphics support as well as speeds increased to 9,600 bps.
DECprinter I, new DECwriter IIs
The DECprinter I, model LA180, was introduced in September 1976. This was essentially a simplified version of the LA35, offered as a printer only, using the Centronics port to provide up to 180 cps. In November, the same basic mechanism was used as the basis for new versions of the LA35 and LA36, differing primarily in using serial ports, which made them easier to connect to DEC systems. These went on to become some of DEC's best-selling products.
DECwriter III
January 1977 saw the introduction of the DECwriter III, or LS120. This was a cost-reduced version of the LA36 that supported only serial input out of the box, lacking the former current loop interface. Three new versions based on the LS120 were introduced in November 1978: the print-only LA120-RA DECprinter III, the LA120-DA, which replaced the LA36 terminal, and the LA120-RB, otherwise similar to the RA but able to print on up to nine-part copy paper as opposed to the normal six-part forms of the base model.
The LA120s were similar to the earlier models mechanically, with only minor changes to the layout of the printer and the floor stand. Internally, the primary change was the addition of a 1 kB buffer, which allowed it to store many lines of text. The printer examined the data, skipping over blank areas at high speed and printing in both directions by reading backward through the buffer where appropriate. The overall speed increased to 180 cps. In addition to the character sets of the II series, the III added National Replacement Character Sets for Finland, Denmark, Sweden, Germany, Norway, and France. It also offered eight options for character width (narrow or wide) and double-strike for bold.
The LA120s normally used only an RS-232 interface, but the LA12X-AL add-on kit provided a current loop for those who needed it, the LA12X-BB added a parallel port, and the LA12X-CB a Unibus interface. The LA12X-DL expanded the buffer to 4 kB.
DECwriter IV
The first complete redesign of the DECwriter line was introduced in June 1982 with the DECwriter IV. In contrast to the earlier models, which were large stand-alone units on their own floor-standing mounts, the IV series were small desk-top systems that looked like contemporary electric typewriters, notably the IBM Selectric. They were slow, at 30 cps, and were not intended as outright replacements for the III series, which were more suited to computer-room operation.
Two models were offered: the LA34, which used a typewriter-like roller feed mechanism, and the LA38, which added a tractor feed mechanism that could also be purchased separately for the LA34. Both fed paper in from the top, like a typewriter, and did not need any room below them for paper feeding.
The DECwriter IV series also introduced optional support for DEC's sixel graphics format, allowing it to produce black and white graphics output. This worked by sending characters using only 6 of the 8 bits of the printable character set and using those six bits to directly control six of the seven pins in the print head. This way, graphics data could be sent over 7-bit links. The printer could expand the data horizontally to several different characters-per-inch settings.
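The packing scheme can be sketched as follows (simplified Python; real sixel streams add DCS framing, repeat counts and color commands, all omitted here): each column of six vertical pixels packs into 6 bits, offset by 63 so the result is always a printable 7-bit character.

```python
# Encode/decode one sixel "band" (six pixel rows). Each list entry is a
# 6-bit column, bit 0 being the top pixel; adding 63 maps the values
# 0-63 onto the printable characters '?' (63) through '~' (126).
def encode_sixel_band(columns):
    return "".join(chr(63 + (col & 0x3F)) for col in columns)

def decode_sixel_band(s):
    return [ord(ch) - 63 for ch in s]

band = [0b000001, 0b111111, 0b100001]   # illustrative pixel columns
encoded = encode_sixel_band(band)
```

Because every output byte stays in the printable ASCII range, the graphics data survives any 7-bit serial link unchanged.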
Letterwriter 100
Otherwise similar to the IV series, the LA100 series used a nine-pin print head and offered three different printing speeds to provide what DEC referred to as draft, memo or letter qualities. In draft mode it printed at 240 cps, while in letter quality it used a 33 by 18 dot matrix that reduced the printing rate to 30 cps. As before, the LA100 was offered as the print-only Letterprinter 100 or a variety of Letterwriter 100 terminals.
The internal character set ROM was further expanded to support British, Finnish, French, Canadian French, German, Italian, Norwegian/Danish, Spanish and Swedish. More interesting was the addition of plug-in ROM cartridges containing the actual glyph data for the characters. The system could support two plug-in cartridges and as many as three internal ROMs (bare chips) to allow five character sets at a time.
DECwriter Correspondent
The LA12 DECwriter Correspondent was a small form-factor 20 pound terminal for portable use. Various models offered built-in modems or other interfaces. The system was otherwise similar to the IV series in features.
Notes
References
Citations
Bibliography
DEC hardware
Computer terminals
List of macOS components

This is a list of macOS components—features that are included in the current Mac operating system.
/Applications
GarageBand
GarageBand is a line of digital audio workstations for macOS, iPadOS, and iOS devices that allows users to create music or podcasts.
iMovie
iMovie is a video editing software application developed by Apple Inc. for macOS and iOS devices. iMovie was released in 1999.
Safari
Safari is the default web browser included with macOS since version 10.3 "Panther". It uses the WebKit browser engine.
/System/Applications
App Store
The App Store is macOS's digital distribution platform for macOS apps, created and maintained by Apple Inc. The platform was announced on October 20, 2010, at Apple's "Back to the Mac" event. First launched on January 6, 2011, as part of the free Mac OS X 10.6.6 update for all current Snow Leopard users, Apple began accepting app submissions from registered developers on November 3, 2010, in preparation for its launch. After 24 hours of release, Apple announced that there were over one million downloads.
Automator
Automator is an app used to create workflows for automating repetitive tasks into batches for quicker alteration via point-and-click (or drag and drop). This saves time and effort over human intervention to manually change each file separately. Automator enables the repetition of tasks across a wide variety of programs, including Finder, Safari, Calendar, Contacts and others. It can also work with third-party applications such as Microsoft Office, Adobe Photoshop or Pixelmator. The icon features a robot holding a pipe, a reference to pipelines, a computer science term for connected data workflows. Automator was first released with Mac OS X Tiger (10.4).
Automator provides a graphical user interface for automating tasks without knowledge of programming or scripting languages. Tasks can be recorded as they are performed by the user or can be selected from a list. The output of the previous action can become the input to the next action.
Automator comes with a library of Actions (file renaming, finding linked images, creating a new mail message, etc.) that act as individual steps in a Workflow document. A Workflow document is used to carry out repetitive tasks. Workflows can be saved and reused. Unix command line scripts and AppleScripts can also be invoked as Actions. The actions are linked together in a Workflow. The Workflow can be saved as an application, Workflow file or a contextual menu item. Options can be set when the Workflow is created or when the Workflow is run. A workflow file created in Automator is saved in /Users/{User Name}/Library/Services.
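The pipeline idea behind a Workflow can be sketched in a few lines (illustrative Python, not Automator's actual API; the two sample actions are hypothetical stand-ins for Automator's built-in library):

```python
# A workflow is just an ordered list of actions; each action's output
# becomes the next action's input, as in Automator's Workflow documents.
def run_workflow(actions, initial_input):
    data = initial_input
    for action in actions:
        data = action(data)
    return data

# Hypothetical actions, mimicking "filter files" and "rename files".
find_txt   = lambda names: [n for n in names if n.endswith(".txt")]
add_prefix = lambda names: ["backup-" + n for n in names]

result = run_workflow([find_txt, add_prefix], ["a.txt", "b.png", "c.txt"])
```

Here the filter action's output (the `.txt` files) feeds directly into the rename action, yielding `["backup-a.txt", "backup-c.txt"]`.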
Starting in macOS Monterey, Automator is being replaced by Shortcuts.
The icon for Automator features a robot, known as Otto the Automator.
Books
Calculator
Calculator is a basic calculator application made by Apple Inc. and bundled with macOS. It has three modes: basic, scientific, and programmer. Basic includes a number pad, buttons for adding, subtracting, multiplying, and dividing, as well as memory keys. Scientific mode supports exponents and trigonometric functions, and programmer mode gives the user access to more options related to computer programming.
The Calculator program has a long history going back to the very beginning of the Macintosh platform, where a simple four-function calculator program was a standard desk accessory from the earliest system versions. Though no higher math capability was included, third-party developers provided upgrades, and Apple released the Graphing Calculator application with the first PowerPC release (7.1.2) of the Mac OS, and it was a standard component through Mac OS 9. Apple currently ships a different application called Grapher.
Calculator has Reverse Polish notation support, and can also speak the buttons pressed and result returned.
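How Reverse Polish notation evaluates can be shown with a minimal stack-based sketch (illustrative Python; this is the general RPN algorithm, not Apple's implementation):

```python
import operator

# Operands push onto a stack; each operator pops two values and pushes
# the result, so "3 4 + 2 *" means (3 + 4) * 2.
OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def rpn(expression):
    stack = []
    for tok in expression.split():
        if tok in OPS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()
```

For example, `rpn("3 4 + 2 *")` evaluates to `14.0` without needing parentheses or precedence rules.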
The Calculator appeared first as a desk accessory in first version of Macintosh System for the 1984 Macintosh 128k. Its original incarnation was developed by Chris Espinosa and its appearance was designed, in part, by Steve Jobs when Espinosa, flustered by Jobs's dissatisfaction with all of his prototype designs, conceived an application called The Steve Jobs Roll Your Own Calculator Construction Set that allowed Jobs to tailor the look of the calculator to his liking. Its design was maintained with the same basic math operations until the final release of classic Mac OS in 2002.
A Dashboard Calculator widget is included in all versions of macOS from Mac OS X Tiger onwards. It only has the basic mode of its desktop counterpart. Since the release of OS X Yosemite, there is also a simple calculator widget available in the notifications area.
Since the release of Mac OS X Leopard, simple arithmetic functions can be calculated from Spotlight feature. They include the standard addition, subtraction, division, multiplication, exponentiation and the use of the percent sign to denote percentage.
Calendar
Calendar is a personal calendar app made by Apple Inc. that runs on both the macOS desktop operating system and the iOS mobile operating system. It offers online cloud backup of calendars using Apple's iCloud service, or can synchronize with other calendar services, including Google Calendar and Microsoft Exchange Server.
Chess
Apple Chess is a 3D chess game for macOS, developed by Apple Inc. as a fork of GNOME Chess (formerly "glChess"). Its history dates back to OpenStep and Mac OS X 10.2. It supports chess variants such as crazyhouse and suicide chess. Apple redistributes the source code under its own Apple Sample Code License, after special permission was granted by the original authors of GNOME Chess (which is licensed under GPL3). Apple also ships the Sjeng chess engine (GPL) with the game.
Contacts
Contacts, called Address Book before OS X Mountain Lion, is a computerized address book included with Apple Inc.'s macOS. It includes various synchronizing capabilities and integrates with other macOS applications and features.
Dictionary
Dictionary is an application that includes an in-built dictionary and thesaurus.
FaceTime
Find My
First added to macOS 10.15, it tracks the location of compatible devices connected via iCloud.
Font Book
Home
Home is the front-end for Apple's HomeKit software framework.
Image Capture
Launchpad
Launchpad is an application launcher for macOS that was first introduced in Mac OS X Lion. Launchpad provides an alternative way to start applications in macOS, in addition to other options such as the Dock (toolbar launcher), Finder (file manager), Spotlight (desktop search) or Terminal (command-line interface).
Mail
Maps
Messages
Mission Control
Music
News
Notes
Notes is macOS's notetaking app. Originally developed for iOS, it was introduced to macOS starting with OS X Mountain Lion. Its main function is to provide a service for creating short text notes on the computer, which can be shared with other macOS or iOS devices via Apple's iCloud service.
Photo Booth
Photo Booth is a camera application for macOS. It utilizes the front iSight camera to take pictures and videos.
Photos
Photos is a photo management and editing application that was designed based on the in-built app released for iOS 8. On macOS, Photos was first introduced to OS X Yosemite users in the 10.10.3 update on April 8, 2015, replacing iPhoto.
Podcasts
Preview
QuickTime Player
The QuickTime player is an application that can play video and sound files.
Reminders
Task-managing app introduced to OS X 10.8 Mountain Lion and iOS 5.
Shortcuts
Shortcuts is a visual scripting application.
Siri
Introduced in macOS 10.12, Siri is a digital assistant that allows the user to interact with it to ask questions, make recommendations, and perform actions on the device. It had been previously included in iOS.
Stickies
Stocks
Stocks is an application that provides information regarding stocks of various companies around the world.
System Preferences
TV
TextEdit
Time Machine
Time Machine is an application where the user can back up their files.
Voice Memos
Voice Memos, introduced in macOS Mojave, is a basic application with the capability of recording audio. In addition to this, it allows several editing functions, such as trimming and overwriting.
/System/Applications/Utilities
Activity Monitor
Activity Monitor is a system monitor for the macOS operating system, which also incorporates task manager functionality. Activity Monitor appeared in Mac OS X v10.3, when it subsumed the functionality of the programs Process Viewer (a task manager) and CPU Monitor found in the previous version of OS X. In OS X 10.9, Activity Monitor was significantly revamped and gained a 5th tab for "energy" (in addition to CPU, memory, disk, and network).
AirPort Utility
AirPort Utility is a program that allows users to configure an AirPort wireless network and manage services associated with, and devices connected to, AirPort routers. It comes pre-installed on macOS, and is available to download for Microsoft Windows and iOS. AirPort Utility is unique in that it offers network configuration in a native application as opposed to a web application. It provides a graphical overview of AirPort devices attached to a network, and provides tools to manage each one individually. It allows users to configure their network preferences, assign Back to My Mac accounts to the network, and configure USB-attached printers and hard drives. The current versions are 6.3.6 for recent versions of macOS, 5.6.1 for Microsoft Windows and older versions of Mac OS X, and 1.3.4 for iOS.
On January 30, 2013, Apple released AirPort Utility 6.0 for macOS featuring a redesign of the user interface focused on increasing usability for novice users. Reception was mixed with some media outlets reporting IT professionals and network administrators being frustrated over some removed features. It was reported that most end users, however, wouldn't notice the feature omissions. Users requiring the removed features can still access the previous version of AirPort Utility using a workaround.
Audio MIDI Setup
The Audio MIDI Setup utility is a program that comes with the macOS operating system for adjusting the computer's audio input and output configuration settings and managing MIDI devices.
It was first introduced in Mac OS X 10.5 Leopard as a simplified way to configure MIDI devices. Prior to this release, MIDI devices did not require this step, so documentation for third-party MIDI devices may omit any mention of it.
Bluetooth File Exchange
Bluetooth File Exchange is a utility that comes with the macOS operating system, used to exchange files to or from a Bluetooth-enabled device. For example, it could be used to send an image to a cellphone, or to receive an image or other documents from a PDA.
Boot Camp Assistant
Assists users with installing Windows on their Mac using Boot Camp. Boot Camp does not support Macs with Apple silicon processors, because Microsoft does not currently offer a commercial version of Windows 10 that runs on ARM-based processors.
ColorSync Utility
ColorSync Utility is software that ships with macOS. It is used for management of color profiles and filters used in Apple's PDF workflow, or applying filters to PDF documents.
The interface is composed of two parts: the document browser and the utility window. The document browser lets the user zoom in and out of an image or apply a Filter to it. The utility window has several options: Profile First Aid, Profiles, Devices, Filters and Calculator.
Profile First Aid allows the user to repair ColorSync color profiles so they conform to the International Color Consortium specification.
Profiles allows the user to browse the profiles installed on the system, grouped by location, class or space, and graphically compare any two profiles.
The profile map is displayed as a rotatable, scalable 3D object and can be plotted in CIELAB, CIELUV, YXY, YUV and CIEXYZ.
The Devices section allows the user to see a list of all registered ColorSync devices such as displays and printers, and see what ColorSync profile is applied to each one. It is also possible to override the default setting.
The Filters section allows the user to build and modify PDF filters that are available to the rest of the operating system. Each filter can be set to appear in one of three domains: Application, PDF Workflows, and Printing.
Filters set to Printing will appear in the drop-down menu under the "Save as PDF..." button in the standard Mac OS X print dialog box. Filters set to PDF Workflow will appear in the Quartz Filters drop-down menu in the ColorSync section of a print dialog box. The default filters that ship with Mac OS X are:
Black & White
Blue Tone
Create Generic PDFX-3 Document
Gray Tone
Lightness Decrease
Lightness Increase
Reduce File Size
Sepia Tone
User-created filters can have color management, image effects, PDF retouch, domain selection and comments.
The Color Management section allows assigning a profile, choosing a default profile, rendering intent, converting to a profile or intermediate transform.
The Intermediate Transform section allows adjustment of brightness, tint, hue, saturation, bilevel (high pass filter) or profile assignment, to either grayscale, RGB or CMYK, or all data in the file. This can be applied to either text, graphics, images or shading.
Complex filters can be created by stacking multiple effects. Any changes made to the PDF file can then be saved as a new PDF file.
Calculator can convert between RGB, CMYK and other color value schemes, and features an interactive color-picker for identifying a color on the screen, duplicating a feature of another bundled utility, DigitalColor Meter.
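The value conversion between color schemes can be sketched with the naive RGB-to-CMYK formula (illustrative Python; a real color-managed conversion would go through ICC profiles and rendering intents as ColorSync does, which this ignores entirely):

```python
# Naive device-value conversion: K absorbs the common darkness, and
# C/M/Y carry what remains of each channel after K is removed.
def rgb_to_cmyk(r, g, b):
    """r, g, b in [0, 1]; returns (c, m, y, k) in [0, 1]."""
    k = 1.0 - max(r, g, b)
    if k == 1.0:                 # pure black: avoid dividing by zero
        return (0.0, 0.0, 0.0, 1.0)
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return (c, m, y, k)
```

Pure red `(1, 0, 0)` maps to `(0, 1, 1, 0)`: full magenta plus full yellow, no cyan, no black.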
ColorSync is Apple Inc.'s color management API for the Classic Mac OS and macOS. Apple developed the original 1.0 version of ColorSync as a Mac-only architecture, which made it into an operating system release in 1993. In the same year, Apple co-founded the International Color Consortium (ICC) to develop a cross-platform profile format which became part of ColorSync 2.0. The system Color Management Module (CMM) was "LinoColorCMM", which was developed by Linotype-Hell AG (now part of Heidelberger Druckmaschinen AG). The same CMM was used in Microsoft Windows 2000 and XP under the rubric of "Image Color Management" (ICM). Apple, with the help of Adobe, had ported ColorSync 2.0 and its SDK to Microsoft Windows. With ColorSync 3.0, the initially planned Windows version was discontinued. ColorSync 4.0 is the latest version, introduced in Mac OS X 10.1.
Human color perception is a very complex and subtle process, and different devices have widely different color gamuts or ranges of color they can display. To deal with these issues, ColorSync provides several different methods of doing color matching. For instance, perceptual matching tries to preserve as closely as possible the relative relationships between colors, even if all the colors must be systematically distorted in order to get them to fit within the gamut of the destination device. Because the human eye is more sensitive to color differences rather than absolute colors, this method tends to produce the best-looking results, subjectively speaking, for many common uses, but there are other methods that work better in some cases. (This set of rendering intents is part of the ICC system, and is available on all systems with ICC.)
As dictated by the ICC system, the profile connection space in ColorSync is the CIE XYZ color space. All image input and output devices (scanners, printers, displays) have to be characterized by providing an ICC profile that defines how their color information is to be interpreted relative to this reference color space. This profile might be provided by the device manufacturer, but for better quality results, it might be generated by performing actual measurements on the device with a colorimeter. Thus, when an image is scanned on a scanner, the image file will include a copy of the scanner's profile to characterize the meaning of its color information. Then, before the image is sent to an output device, a matching process converts the color information at the time of rendering from the source profile (that attached to the image) to the destination profile (that attached to the output device) so that the resulting colors print or display as closely as possible to the original image.
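The first half of such a match, mapping source values into the CIE XYZ connection space, can be sketched for the common sRGB case (illustrative Python using the standard IEC 61966-2-1 sRGB matrix for D65; a real CMM would then apply the destination profile's inverse transform and a rendering intent, both omitted here):

```python
# Standard linear-sRGB -> CIE XYZ matrix (D65 white point).
SRGB_TO_XYZ = [
    (0.4124564, 0.3575761, 0.1804375),
    (0.2126729, 0.7151522, 0.0721750),
    (0.0193339, 0.1191920, 0.9503041),
]

def linearize(c):
    """Undo the sRGB transfer curve for one channel value in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(r, g, b):
    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    return tuple(m[0] * rl + m[1] * gl + m[2] * bl for m in SRGB_TO_XYZ)

x, y, z = srgb_to_xyz(1.0, 1.0, 1.0)   # sRGB white -> the D65 white point
```

As a sanity check, sRGB white lands on the D65 white point, with luminance Y equal to 1.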
Console
Console is a log viewer developed by Apple Inc. and included with macOS. It allows users to search through all of the system's logged messages, and can alert the user when certain types of messages are logged. The Console is generally used for troubleshooting when there is a problem with the computer. macOS itself, as well as any applications that are used, send a constant stream of messages to the system in the form of log files. The console allows users to read the system logs, help find certain ones, monitor them, and filter their contents.
Clicking on "Show Log List" in the toolbar will bring up the Log List. The Log List opens a sidebar which shows all of the different logs that the system maintains. This list helps in viewing the many different logs maintained in various parts of the system by bringing them all together to one place. By clicking on a particular log category, all of the logs will be shown.
The System Log Queries contains all of the logs that have to do with the entire system. This includes system logs as well as individual application logs.
Selecting All Messages gives a continuously updated view of the computer's activities, including those of the system as well as any applications running. Logs in this section of the Console are all formatted uniformly: each includes a timestamp, the name of the process or application, and the actual message of the log. When a message displays a paperclip icon next to it, it is a shortened version of a longer report, and clicking the icon shows the complete report.
In addition to viewing all messages, users can also create custom queries with any criteria that they like. These custom queries will filter the messages and will also be shown in the All Messages section. In order to make a new query, choose "New System Log Query" from the File menu.
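A query over the uniform log format described above (timestamp, process name, message) amounts to parsing and filtering lines. A minimal sketch in Python — the sample lines and regular expression are illustrative, not Console's actual on-disk format:

```python
import re

# timestamp (three tokens), then "process[pid]: message"
LOG_LINE = re.compile(
    r"^(?P<ts>\S+ \S+ \S+)\s+(?P<proc>[^\[\s]+)\[\d+\]:\s(?P<msg>.*)$"
)

lines = [
    "Jan 12 09:15:01 kernel[0]: wake reason: EC.LidOpen",
    "Jan 12 09:15:03 Safari[412]: tab restored",
    "Jan 12 09:15:04 kernel[0]: USB device attached",
]

def query(lines, process):
    """A tiny stand-in for a Console query: keep one process's messages."""
    out = []
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("proc") == process:
            out.append(m.group("msg"))
    return out

kernel_msgs = query(lines, "kernel")
```

Here `query(lines, "kernel")` keeps only the two kernel messages, much as a saved query filters All Messages down to one criterion.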
Digital Color Meter
Disk Utility
Grapher
Keychain Access
Migration Assistant
Migration Assistant is an application for migrating information from another computer to the computer in use. It may be from a Windows computer or a Mac.
Screenshot
Available in macOS Mojave (10.14) and above, the Screenshot app bundles features such as screen recording and taking screenshots. Screenshot is initialized whenever the user presses one of its keyboard shortcuts.
Script Editor
System Information
System Information is an application that shows the system information about a Macintosh product.
Terminal
VoiceOver Utility
VoiceOver Utility configures VoiceOver, which reads spoken descriptions of items on the computer screen.
/System/Library/CoreServices
AddPrinter
AddressBookUrlForwarder
AirPlayUIAgent
AirPort Base Station Agent
AppleScript Utility
Automator Application Stub
Automator Installer
AVB Audio Configuration
Bluetooth Setup Assistant
BluetoothUIServer
CalendarFileHandler
Captive Network Assistant
Certificate Assistant
The Certificate Assistant is a utility for creating and verifying digital certificates.
ClimateProxy
Clock
ControlCenter
Control Center provides access to system controls, such as Wi-Fi, Bluetooth, and Sound, in a unified interface accessible from the menu bar. Some of these controls can be added to the menu bar by dragging them from Control Center. Additional components can be added in System Preferences. Available controls include:
Wi-Fi
Bluetooth
AirDrop
Do Not Disturb
Keyboard Brightness (available on Mac notebooks)
Screen Mirroring
Display
Sound
Now Playing
Accessibility Shortcuts
Battery (available on Mac notebooks)
Fast User Switching
ControlStrip
Control Strip provides controls in the Touch Bar for common settings like brightness and volume.
CoreLocationAgent
CoreLocationAgent is responsible for displaying authorization prompts to allow apps and widgets to access location services.
CoreServicesUIAgent
Database Events
DiscHelper
DiskImageMounter
DiskImageMounter is a CoreServices application used to mount disk images.
Dock
The Dock is the main method of launching applications on macOS.
Dwell Control
Dwell allows the pointer to be controlled using head- or eye-tracking technologies. A user can trigger programmed dwell actions by dwelling, that is, holding the pointer still for a specified amount of time.
EscrowSecurityAlert
Family (OSX)
Finder
Finder is the default file manager and graphical interface shell of macOS.
FolderActionsDispatcher
FolderActionsDispatcher is responsible for monitoring changes to the filesystem to run Folder Action scripts.
Games
HelpViewer
iCloud
Image Events
Install Command Line Developer Tools
This utility allows developers to easily install command line tools for building, creating, and editing Xcode projects through the terminal, but does not require Xcode to be installed.
It is possible to execute this program by running in the terminal.
Install in Progress
Install in Progress is another application included in macOS (and in its progenitors OPENSTEP and NeXTSTEP) that extracts and installs files out of .pkg packages. It was created by NeXT, and is now maintained by Apple Inc.
Installer Progress
Installer
JavaLauncher
KeyboardAccessAgent
KeyboardSetupAssistant
Keychain Circle Notification
Keychain Circle Notification is involved in iCloud Keychain syncing.
Language Chooser
LocationMenu
loginwindow
The loginwindow process displays the macOS login window at system startup if auto-login is not set, verifies login attempts, and launches login applications. It also implements the Force Quit window, restarts macOS user interface components (the Dock and Finder) if they crash, and handles the logout, restart, and shutdown routines.
Users are assigned their own loginwindow when they log in; if a loginwindow process belonging to a specific user is force quit, they will be logged out.
ManagedClient
macOS Setup Assistant
Setup Assistant is the app that allows users that have installed a fresh copy of macOS or have just bought a new Mac to set it up and configure important settings like Computer Accounts, Apple ID, iCloud, and Accessibility settings. It is also run after major macOS system upgrades.
Memory Slot Utility
NetAuthAgent
NetAuthAgent is an interface for authenticating with network file servers and selecting shares.
NotificationCenter
NotificationCenter is an application that displays notifications from apps and websites. Users access Notification Center by clicking the clock in the menu bar on macOS Big Sur or the Notification Center icon in earlier versions of macOS. Notification Center can be customized in System Preferences.
NowPlayingTouchUI
NowPlayingWidgetContainer
OBEXAgent
OBEXAgent is a server that handles Bluetooth access.
ODSAgent
ODSAgent is a server that handles remote disk access.
OSDUIHelper
OSDUIHelper displays on-screen graphics when certain settings, such as volume or display brightness, are adjusted.
PIPAgent
PIPAgent manages the picture-in-picture feature available in macOS Sierra and later.
Paired Devices
Pass Viewer
Photo Library Migration Utility
Photo Library Migration Utility migrates iPhoto and Aperture libraries to Photos.
PodcastsAuthAgent
PowerChime
PowerChime, present on some MacBook models, is a macOS service that plays a chime when plugged in to power.
Problem Reporter
ProfileHelper
RapportUIAgent
rcd
RegisterPluginIMApp
ReportPanic
ReportPanic is an app that displays a window when the system reboots from a kernel panic; it allows the user to send a report to Apple.
Rosetta 2 Updater
Screen Time
screencaptureui
screencaptureui draws the user interface shown when taking a screenshot.
ScreenSaverEngine
ScreenSaverEngine is an application that handles screen saver access.
Script Menu
ScriptMonitor
Siri
Siri is the virtual assistant that is part of Apple's iOS, iPadOS, watchOS, macOS, and tvOS operating systems. Siri uses voice queries and a natural-language user interface to answer questions, make recommendations, and perform actions by delegating requests to a set of Internet services.
Siri was brought to macOS in 2016 with the release of macOS Sierra (10.12).
Software Update
Software Update is a section in System Preferences for Mac Software Updates, as well as updates to core Mac apps, starting in macOS Mojave (10.14); it also has an item in the Apple menu. From OS X Mountain Lion (10.8) to macOS High Sierra (10.13), the Mac App Store was used for Software Updates; prior to that, Software Update was a separate utility, which could be launched from the Apple menu or from the Software Update pane in System Preferences.
SpacesTouchBarAgent
Spotlight
Spotlight is macOS's selection-based search system, used for indexing documents, pictures, music, applications, and System Preferences within the computer.
System Events
SystemUIServer
SystemUIServer manages status items in the menu bar.
TextInputMenuAgent
TextInputSwitcher
ThermalTrap
ThermalTrap notifies users when the system temperature exceeds a usable limit.
UIKitSystem
UniversalAccessControl
UnmountAssistantAgent
UnmountAssistantAgent displays a dialog if there is a process preventing ejection of a disk and offers to forcibly eject the disk if the process cannot be quit.
UserNotificationCenter
VoiceOver
WatchFaceAlert
WidgetKit Simulator
/System/Library/CoreServices/Applications
About This Mac
About This Mac shows information about the Mac it is running on, such as the hardware, serial number, and macOS version.
Archive Utility
Archive Utility (BOMArchiveHelper until Mac OS X 10.5) is the default archive file handler in macOS. It is usually invoked automatically when opening a file in one of its supported formats. It can be used to create compressed ZIP archives by choosing "Create archive of 'file'" (Leopard: "Compress") in the Finder's File or contextual menu. It is located at /System/Library/CoreServices/Applications/Archive Utility.app in Mac OS X 10.10, /System/Library/CoreServices/Archive Utility.app in 10.5 and later, and /System/Library/CoreServices/BOMArchiveHelper.app in 10.4. Prior to Archive Utility's inclusion in Mac OS X v10.3, beginning with Mac OS 7.6, Apple bundled the freeware StuffIt Expander with the operating system.
Invoking Archive Utility manually shows a minimal GUI letting the user change Archive Utility preferences or choose files to compress or uncompress.
BOM is an abbreviation of Bill of Materials. Bill of Materials files or .bom files are used by the macOS Installer program to document where files in an installer bundle are installed, what their file permissions should be, and other file metadata. Thus, a Bill of Materials is read by the Installer, and Archive Utility helps it by extracting the files specified in the BOM.
DVD Player
DVD Player (formerly Apple DVD Player) is the default DVD player in macOS. It supports all the standard DVD features such as multiple audio, video & subtitle tracks as well as Dolby Digital 5.1 passthrough, DVD access URLs and closed captions. In some instances, users can choose which VOB file to open. DVD Player is also fully compatible with DVDs authored by DVD Studio Pro and iDVD, including HD DVDs by DVD Studio Pro.
Directory Utility
Expansion Slot Utility
Expansion Slot Utility allows manual allocation of PCIe card bandwidth. It is only available on certain Mac Pro models.
Feedback Assistant
Folder Actions Setup
iOS App Installer
iOS App Installer is an app that downloads .ipa files for iPadOS applications so that they can be run on Apple silicon-based Macs.
Network Utility
Screen Sharing
Screen Sharing is a basic Mac utility that may be used to control remote computers and access their files. To connect, one may enter a VNC address or Apple ID and authenticate as a local user on the remote computer, or, if the computers are linked via the same Apple ID, automatically initialise the connection. It supports features such as a shared clipboard between the two computers and remotely transferring files.
The feature must be enabled in the Sharing preference pane in System Preferences.
Storage Management
Ticket Viewer
Ticket Viewer is an app that displays Kerberos tickets.
Wireless Diagnostics
Other applications and accessories
Crash Reporter
Crash Reporter is the standard crash reporter in macOS. Crash Reporter can send the crash logs to Apple Inc. for their engineers to review.
Crash Reporter has four modes of operation:
Basic — The default mode. Only application crashes are reported, and the dialog does not contain any debugging information.
Developer — In addition to application crashes, crashes are also displayed for background and system processes.
Server — The default for macOS Server systems. No crash reports are shown to the user (though they are still logged).
None — Disables the dialog prompt. Crash reports are neither displayed nor logged.
The developer tool CrashReporterPrefs can be used to change modes, as can the terminal command defaults write com.apple.CrashReporter DialogType [basic|developer|server].
In basic mode, if Crash Reporter notices an application has crashed twice in succession, it will offer to rename the application's preference file and try again (corrupted preference files being a common cause of crashes).
When reporting a crash, the top text field of the window has the crash log, while the bottom field is for user comments. Users may also copy and paste the log into their e-mail client to send to a third-party application developer for the developer to use.
Directory Access
Internet Connect
NetInfo Manager
ODBC Administrator
Printer Setup Utility
Older applications
Classic
The Classic Environment, usually referred to as Classic, is a hardware and software abstraction layer in PowerPC versions of Mac OS X that allows most legacy applications compatible with Mac OS 9 to run on Mac OS X. The name "Classic" is also sometimes used by software vendors to refer to the application programming interface available to "classic" applications, to differentiate between programming for Mac OS X and the classic version of the Mac OS.
The Classic Environment is supported on PowerPC-based Macintosh computers running versions of Mac OS X up to 10.4 "Tiger", but not with 10.5 "Leopard" or Macintoshes utilizing any other architecture than PowerPC.
The Classic Environment is a descendant of Rhapsody's "Blue Box" virtualization layer, which served as a proof of concept. (Previously, Apple A/UX also offered a virtualized Mac OS environment on top of a UNIX operating system.) It uses a Mac OS 9 System Folder, and a New World ROM file to bridge the differences between the older PowerPC Macintosh platforms and the XNU kernel environment. The Classic Environment was created as a key element of Apple's strategy to replace the classic Mac OS (versions 9 and below) with Mac OS X as the standard operating system (OS) used by Macintosh computers by eliminating the need to use the older OS directly.
The Classic Environment can be loaded at login (for faster activation when needed later), on command, or whenever a Mac OS application that requires it is launched (to reduce the use of system resources when not needed). It requires a full version of Mac OS 9 to be installed on the system, and loads an instance of that OS in a sandbox environment, replacing some low-level system calls with equivalent calls to Mac OS X via updated system files and the Classic Support system enabler. This sandbox is used to launch all "classic" Mac OS applications—there is only one instance of the Classic process running for a given user, and only one user per machine may be running Classic at a time.
If the user chooses to launch the Classic Environment only when needed, launching a "classic" application first launches the Classic Environment, which can be configured to appear in a window resembling the display of a computer booting into Mac OS 9. When the Classic Environment has finished loading, the application launches. When a "classic" application is in the foreground, the menu bar at the top of the screen changes to look like the older Mac OS system menu. Dialog boxes and other user-interface elements retain their traditional appearance.
The Classic Environment provides a way to run "Classic" applications on Apple's G5 systems as well as on most G4 based computers sold after January 2003. These machines cannot boot Mac OS 9 or earlier without the bridging capabilities of the Classic Environment or other software (see SheepShaver).
The Classic Environment's compatibility is usually sufficient for many applications, provided the application using it does not require direct access to hardware or engage in full-screen drawing. However, it is not a complete clone of Mac OS 9. The Finder included with Mac OS X v10.2 and later does not support the "Reveal Object" Apple events used by some Mac OS 9 applications, causing the "Reveal In Finder" functionality for those applications to be lost. Early releases of Mac OS X would often fail to draw window frames of Classic applications correctly, and after the Classic Environment's windowing was made double buffered in Mac OS X Panther, some older applications and games sometimes failed to update the screen properly, such as the original Macintosh port of Doom. However, the Classic Environment "resurrected" some older applications that had previously been unusable on the Macintosh Quadra and Power Macintosh series; this is because Mac OS X replaced Mac OS 9's virtual memory system with a more standard and less fragile implementation.
The Classic Environment's performance is also generally acceptable, with a few exceptions. Most of an application is run directly as PowerPC code (which would not be possible on Intel-based Macs). Motorola 68k code is handled by the same Motorola 68LC040 emulator that Mac OS 9 uses. Some application functions are actually faster in the Classic Environment than under Mac OS 9 on equivalent hardware, due to performance improvements in the newer operating system's device drivers. These applications are largely those that use heavy disk processing, and were often quickly ported to Mac OS X by their developers. On the other hand, applications that rely on heavy processing and which did not share resources under Mac OS 9's co-operative multitasking model will be interrupted by other (non-Classic) processes under Mac OS X's preemptive multitasking. The greater processing power of most systems that run Mac OS X (compared to systems intended to run Mac OS 8 or 9) helps to mitigate the performance degradation of the Classic Environment's virtualization.
Dashboard
Dashboard was an application for Apple Inc.'s macOS operating systems, used as a secondary desktop for hosting mini-applications known as widgets. These were intended to be simple applications that did not take time to launch. Dashboard applications supplied with macOS included a stock ticker, weather report, calculator and notepad; users could create or download their own. Before Mac OS X 10.7 Lion, when Dashboard was activated, the user's desktop was dimmed and widgets appeared in the foreground. Like application windows, they could be moved around, rearranged, deleted, and recreated (so that more than one of the same widget could be open at the same time, possibly with different settings). New widgets could be opened via an icon bar on the bottom of the layer, which loaded a list of available widgets similar to the iOS home screen or the macOS Launchpad. After loading, the widget was ready for use.
Dashboard was first introduced in Mac OS X 10.4 Tiger. It could be activated as an application, from the Dock, Launchpad or Spotlight, or by a dashboard key. Alternatively, the user could choose to make Dashboard open on moving the cursor into a preassigned hot corner or via a keyboard shortcut. Starting with Mac OS X 10.7 Lion, the Dashboard could be configured as a space, accessed by swiping four fingers to the right from the desktops on either side of it. In OS X 10.10 Yosemite, the Dashboard was disabled by default, as the Notification Center became the primary method of displaying widgets.
As of macOS 10.15 Catalina, Dashboard has been removed from macOS.
Grab
Grab is a utility program in macOS for taking screenshots. It supports capturing a marquee selection, a whole window, and the whole screen, as well as timed screenshots. The program originated in OPENSTEP and NeXTSTEP, and continued to be preinstalled by Apple on macOS until version 10.13 (High Sierra). It was replaced by the utility Screenshot in macOS 10.14 (Mojave). On macOS versions 10.13 and earlier, Grab is found in the folder Utilities, which is a subdirectory of Applications. It may be quickly opened by entering grab in Spotlight, or by pressing ⌘ Cmd+⇧ Shift+G in the Finder and typing /Applications/Utilities/Grab.app. It was previously also found in the Finder menu under Services > Grab.
Grab saves screenshots in the Tagged Image File Format (TIFF). In macOS, it is also possible to save screenshots directly to the Desktop in PDF format (earlier versions of macOS) or PNG format (later versions), using keystrokes shown below. For DRM reasons, it is not possible to use this software while DVD Player is open.
Grab helps determine the size of an element on the screen. After using the selection feature and capturing the screen, one can select Inspector from the menu or press ⌘ Cmd+1 (or ⌘ Cmd+I); a dialog box will appear with the dimensions of the selected area.
iDVD
iDVD is a discontinued DVD-creation application.
iSync
iSync is a software application first released by Apple Inc. on January 2, 2003. Apple licensed the core technology from fusionOne. It ran only under Mac OS X and was used to synchronize contact and calendar data from Address Book and iCal with many non-Apple SyncML-enabled mobile phones via a Bluetooth or USB connection. Support for many (pre-October 2007) devices was built-in, with newer devices being supported via manufacturer and third-party iSync Plugins. Support for Palm OS organizers and compatible smartphones was removed with the release of iSync 3.1 and Mac OS X 10.6 Snow Leopard. BlackBerry OS, Palm OS, and Windows Mobile (Pocket PC) devices could not be used with iSync, but were supported by third-party applications. Before the release of Mac OS X 10.4, iSync also synchronized a user's Safari bookmarks with the then .Mac subscription service provided by Apple. iSync was removed from Mac OS X in version 10.7 (Lion). However, since the underlying framework still existed in Lion and 10.8 (Mountain Lion), it was possible to restore the functionality of iSync using a 10.6 (Snow Leopard) installation or backup.
iTunes
iTunes is a media player, media library, Internet radio broadcaster, mobile device management utility, and the client app for iTunes Store. It is used to purchase, play, download, and organize digital multimedia, on personal computers running the macOS and Windows operating systems. iTunes is developed by Apple Inc. It was announced on January 9, 2001.
Because iTunes was criticized for having a bloated user experience, Apple decided to split iTunes into separate apps as of macOS Catalina: Music, Podcasts, and TV. Finder would take over the device management capabilities. This change would not affect Windows or older macOS versions.
Sherlock
Sherlock, named after fictional detective Sherlock Holmes, was a file and web search tool created by Apple Inc. for the PowerPC-based "classic" Mac OS, introduced with Mac OS 8 as an extension of the Mac OS Finder's file searching capabilities. Like its predecessor (System 7.5’s totally revamped 'Find File' app, adapted by Bill Monk from his 'Find Pro' shareware find program[1]), Sherlock searched for local files and file contents, using the same basic indexing code and search logic found in AppleSearch. Sherlock extended the system by enabling the user to search for items through the World Wide Web through a set of plugins which employed existing web search engines. These plugins were written as plain text files, so that it was a simple task for a user to write a Sherlock plugin. Since most of the standard plug-ins for Sherlock provided by Apple itself no longer function, it was officially retired and removed in the release of Mac OS X 10.5 Leopard in 2007.
Software Update
In Mac OS 9 and earlier versions of Mac OS X, Software Update was a standalone tool. The program was part of the CoreServices in OS X. It could automatically inform users of new updates (with new features and bug and security fixes) to the operating system, applications, device drivers, and firmware. All updates required the user to enter their administrative password and some required a system restart. It could be set to check for updates daily, weekly, monthly, or not at all; in addition, it could download and store the associated .pkg file (the same type used by Installer) to be installed at a later date, and it maintained a history of installed updates. Starting with Mac OS X 10.5 Leopard, updates that required a reboot logged out the user prior to installation and automatically restarted the computer when complete. In earlier versions of OS X, the updates were installed, but critical files were not replaced until the next system startup.
Beginning with OS X 10.8, Software Update became part of the App Store application. Beginning with macOS Mojave (10.14), it became part of System Preferences.
X11
In Mac OS X Tiger, X11 was an optional install included on the install DVD. Mac OS X Leopard, Snow Leopard and Lion installed X11 by default, but from OS X Mountain Lion (10.8), Apple dropped dedicated support for X11, with users directed to the open source XQuartz project (to which it contributes) instead.
Development tools
Server technology
Core components
AppleScript
Aqua
Audio Units
Bonjour
Boot Camp
Carbon
Cocoa
Core Animation
Core Audio
Core Data
Core Image
Core Video
Darwin
Mission Control
Keychain
OpenGL
plist
Quartz
QuickTime
Rosetta
Smart folder
Spaces
WebKit
XNU
Hierarchical routing
Hierarchical routing is a method of routing in networks that is based on hierarchical addressing.
Background
Most Transmission Control Protocol/Internet Protocol (TCP/IP) routing is based on a two-level hierarchical routing in which an IP address is divided into a network portion and a host portion. Gateways use only the network portion until an IP datagram reaches a gateway that can deliver it directly. Additional levels of hierarchical routing are introduced by the addition of subnetworks.
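This network/host split can be illustrated with Python's standard ipaddress module (a minimal sketch; the address and prefix length are arbitrary examples):

```python
import ipaddress

# A /24 network: the first 24 bits are the network portion,
# the remaining 8 bits identify the host.
iface = ipaddress.ip_interface("192.168.10.57/24")

network_part = iface.network  # the network portion gateways compare
host_part = int(iface.ip) & ~int(iface.network.netmask)  # low-order host bits

print(network_part)  # 192.168.10.0/24
print(host_part)     # 57

# A gateway only needs the network portion until the datagram reaches
# a router that can deliver it directly on 192.168.10.0/24.
other = ipaddress.ip_address("192.168.10.200")
print(other in iface.network)  # True: deliverable directly
```

Subnetting simply moves the dividing line further right, borrowing host bits to create additional levels of hierarchy.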
Description
Hierarchical routing is the procedure of arranging routers in a hierarchical manner. A good example is a corporate intranet: most corporate intranets consist of a high-speed backbone network, with routers connected to the backbone that are in turn connected to particular workgroups, each occupying its own LAN. This arrangement works well because even though there might be dozens of different workgroups, the span (the maximum hop count to get from one host to any other host on the network) is 2. Even if the workgroups divided their LANs into smaller partitions, the span could only increase to 4 in this particular example.
Comparing this with alternative designs, such as connecting every router to every other router or connecting each router to only two neighbors, shows the convenience of hierarchical routing. It decreases the complexity of the network topology, increases routing efficiency, and causes much less congestion because of fewer routing advertisements. With hierarchical routing, only core routers connected to the backbone are aware of all routes. Routers that lie within a LAN only know about routes in the LAN. Unrecognized destinations are passed to the default route.
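The behaviour described above can be sketched as a longest-prefix-match lookup: a LAN router knows only its local prefixes and hands everything else to a default route toward the backbone. The prefixes and next-hop names below are invented for illustration:

```python
import ipaddress

# Routing table for a LAN router: local workgroup prefixes plus a
# default route (0.0.0.0/0) pointing at the backbone core router.
routes = [
    (ipaddress.ip_network("10.1.0.0/24"), "lan-switch-A"),
    (ipaddress.ip_network("10.1.1.0/24"), "lan-switch-B"),
    (ipaddress.ip_network("0.0.0.0/0"), "core-router"),  # default route
]

def next_hop(dst: str) -> str:
    """Pick the matching route with the longest prefix (most specific)."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routes if addr in net]
    best = max(matches, key=lambda m: m[0].prefixlen)
    return best[1]

print(next_hop("10.1.1.42"))  # lan-switch-B: known local route
print(next_hop("192.0.2.9"))  # core-router: unrecognized, default route
```

Because 0.0.0.0/0 matches every address but has prefix length 0, it is chosen only when no more specific local route matches.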
References
External links
http://www.isi.edu/nsnam/ns/doc-stable/node310.html
Blue Coat Systems
Blue Coat Systems was a company that provided hardware, software, and services designed for cybersecurity and network management. In 2016, it was acquired by and folded into Symantec.
The company was known as CacheFlow until 2002.
The company had "a broad security portfolio including hardware, software and services." The company was best known for web gateway appliances that scan internet traffic for security threats, authenticate users and manage encrypted traffic, as well as products to monitor and filter employee internet activity. It also produced consumer products, such as parental control software. The company's products were initially sold to internet service providers, but later products were intended for large companies.
History
In March 1996, the company was founded as CacheFlow, Inc. in Redmond, Washington by Michael Malcolm, a computer scientist and professor at the University of Waterloo, Joe Pruskowski, and Doug Crow. The company initially raised $1 million in seed money from a dozen angel investors.
The company's goal was to develop appliances that would increase website speed by storing frequently accessed web data in the cache.
In October 1996, Benchmark Capital purchased 25% of the company for $2.8 million, equivalent to a price of 87.5 cents per share. By February 1997, the company had raised $5.1 million. In December 1997, U.S. Venture Partners acquired 17% of the company for $6 million.
In 1997, the company moved its headquarters to Silicon Valley.
In January 1998, the company released its first product, the CacheFlow 1000. It cached website objects that users were likely to use repeatedly, to increase load speed. In October 1999, the CacheOS operating system was released.
In April 1998, the company released the CacheFlow 100.
In mid-1998, during the dot-com bubble, the company made its first sales, earning just $809,000 over three months, and investors started pushing for an initial public offering (IPO).
In March 1999, Technology Crossover Ventures invested $8.7 million for 7% of the company, equivalent to a price of $4.575 per share.
In September 1999, the company released the CacheFlow 500, with a list price of $9,995. A competitive review of caching appliances in PC Magazine gave the CacheFlow 500 Editor's Choice. The editor said it had "excellent performance", a "plug-and-go" setup, and "good management tools". The review noted that its "most noteworthy features" were its DNS caching and object pipelining techniques, which allowed page data to be delivered in parallel, rather than sequential, streams.
In early November 1999, Marc Andreessen invested $3.1 million in shares, at a price of $11.04 per share.
On November 19, 1999, during the peak of the dot-com bubble, the company became a public company via an initial public offering, raising $132 million. Shares rose fivefold on its first day of trading. However, the company was not profitable and its product was unproven. At that time, the company had $3.6 million in revenue and $6.6 million in losses in the prior 3 months. The company had initially hoped to price its shares at $11-13 each and raise $50 million but due to strong demand, it priced its shares at $26 each. One month later, they traded as high as $182 each.
In 1999, board of directors member Andrew Rachleff of Benchmark Capital, an investor, brought in Brian NeSmith, who had just sold his company, Ipsilon Networks, to Nokia for $120 million, as chief executive officer.
In 2000, it introduced the CacheFlow Server Accelerator product family, which offloads content delivery tasks from web servers. Tests by Network World found the Server Accelerator 725 increased website load speed eight-fold. The CacheFlow Client Accelerator 600 product family was also introduced in 2000. It was the company's first product family for corporate networks, caching Web or multimedia content directly on the corporate network.
Revenues grew from $7 million in 1998, to $29 million in 1999 and $97 million in 2001. It already had 25% of the overall market for caching and 35% of the caching appliance market and was still not profitable.
In March 2000, the company integrated its products with those of Akamai Technologies.
In May 2000, the company updated CacheFlow OS to cache multimedia content, including RealNetworks' RealSystem, Microsoft's Windows Media and Apple's Quicktime formats.
In June 2000, the company acquired Springbank Networks, an internet device company, for $180 million.
In December 2000, the company acquired Entera, a digital content streaming company, for $440 million.
In 2001, new features specifically for streaming media were introduced under the name "cIQ".
Later in 2001, the company shifted its focus to security appliances and released the SG800. The appliance sat behind corporate firewalls to filter website traffic for viruses, worms and other harmful software. It had a custom operating system called Security Gateway and provided many of its security features through partners, like Symantec and Trend Micro.
The company lost 97% of its value from October 2000 to March 2001 as the dot-com bubble burst. The company continued to lose money. By 2002, several competing internet caching companies had abandoned the market, due to slow adoption of caching technology and most of CacheFlow's revenues came from its IT security products.
In February 2002, the company closed its Redmond, WA office and laid off about 60 workers there. Around a dozen Redmond workers were offered transfers to the Sunnyvale, CA office.
In April 2002, the company launched the fifth version of its operating system. The Security Gateway 600/6000 Series was the company's newest product family. It had a range of security features, such as authentication, internet use policies, virus scanning, content filtering, and bandwidth restrictions for streaming video applications.
In August 2002, the company started adding IT security features.
Also in August 2002, the company changed its name to Blue Coat Systems. The new name was intended to evoke the image of a police officer or guard. The company then focused on internet security appliances. Its products were primarily used to control, monitor and secure internet use by employees. For example, a company could limit employee access to online gaming and video streaming, as well as scan attachments for viruses.
The shift in focus was followed by smaller losses and revenues at first and eventually company growth. Losses in 2002 after the rename were $247 million, about half of the prior year's losses.
By 2003, the company had 250 employees and $150 million in annual revenue.
In 2003, three new products were introduced for small and medium-sized businesses (SMBs) with 50, 100 or 250 users, in bundles with Websense and Secure Computing, although the Websense partnership was later at least partially dismantled and the two vendors were "slinging mud at each other" in the VAR community. This was followed by a second generation of the ProxySG product family, which added security features for instant messaging. A review in eWeek said the new ProxySG line was effective and easy to deploy, but the ongoing maintenance fees were expensive.
In October 2003, the company acquired Ositis Software, maker of antivirus appliances, for $1.36 million.
In July 2004, the company acquired Cerberian, a URL filtering company, for $17.5 million.
In 2005, the company introduced an anti-spyware appliance called Spyware Interceptor and the following year it announced upcoming WAN optimization products. This was followed by SSL-VPN security appliances to secure remote connections.
In 2005, the company was profitable for the first time.
In 2006, the company introduced a free web-tool, K9 Web Protection, that can monitor internet traffic, block certain websites, identify phishing scams. In a November 2008 review, PC World gave it 4.25 out of 5 stars.
In March 2006, the company acquired Permeo Technologies, an end point security company, for $60 million.
In June 2006, the company acquired NetCache assets from NetApp, which were involved in proxy caching, for $30 million.
In June 2008, the company acquired Packeteer, engaged in WAN optimization, for $268 million.
In 2009, the company introduced a plugin for PacketShaper to throttle applications such as Spotify. A review of the PacketShaper 12000 in IT Pro gave it four out of five stars. The review said that "you won't find superior WAN traffic management anywhere else," but "the hardware platform could be more up to date considering the price."
In November 2009, the company went through a restructuring that included layoffs of 280 of its 1,500 employees and the closing of facilities in Latvia, New Jersey and the Netherlands.
In February 2010, the company acquired S7 Software Solutions, an IT research and development firm based in Bangalore, for $5.25 million.
In August 2010, Michael J. Borman was named president and CEO of the company.
In August 2011, CEO Michael Borman was fired for failing to meet performance goals and was replaced with Greg Clark.
In December 2011, Thoma Cressey Bravo acquired the company for $1.3 billion.
In March 2012, David Murphy was named chief operating officer and president of the company.
In December 2012, the company acquired Crossbeam Systems, maker of the X-Series of security appliances.
In May 2013, the company acquired Solera Networks, involved in big data security.
In December 2013, the company acquired Norman Shark, an anti malware firm.
In 2014, Elastica's technology was incorporated into Blue Coat products, with its Audit subscription service being merged into the Blue Coat Appfeed in the ProxySG gateways.
In June 2015, the company acquired Perspecsys, involved in cloud security.
In March 2015, the company integrated technologies from its acquisitions of Norman Shark and Solera Networks to create a cloud-based product family called the Global Intelligence Network.
Also in March 2015, the company pressured security researcher Raphaël Rigo into canceling his talk at SyScan '15. Although Rigo's talk did not contain any information about vulnerabilities in the ProxySG platform, the company cited concerns that the talk was going to "provide information useful to the ongoing security assessments of ProxySG by Blue Coat." The cancellation was met with harsh criticism from prominent security researchers and professionals, who generally welcome technical information about widely used security products.
In March 2015, Bain Capital acquired the company from Thoma Bravo for $2.4 billion. Bain indicated that it hoped to launch another IPO and several months later, anonymous sources said the company was looking for investment banks for that purpose.
In August 2015, the company launched the Alliance Ecosystem of Endpoint Detection and Response (EDR) to share information about IT security threats across vendors and companies. Its first channel program was started that March and the Cloud Ready Partner Program was announced the following April.
In November 2015, the company acquired Elastica, involved in cloud security, for $280 million.
By 2015, the company had $200 million in annual profits and about $600 million in revenues, a 50% increase from 2011.
On June 2, 2016, the company once again filed plans for an initial public offering. However, on June 13, 2016, the company abandoned its IPO plans and announced that Symantec agreed to acquire the company for $4.65 billion. The acquisition was completed on August 1, 2016 and the company's products were integrated into those of Symantec.
Blue Coat CEO Greg Clark was named CEO of Symantec and Blue Coat COO Michael Fey was named president and COO of Symantec.
Use by repressive regimes
Blue Coat devices are what is known as a "dual-use" technology, because they can be used both to defend corporate networks and by governments to censor and monitor the public's internet traffic. The appliances can see some types of encrypted traffic, block websites or record website traffic.
In October 2011, the U.S. government examined claims made by Telecomix, a hacktivist group, that Syria was using Blue Coat Systems products for internet censorship. Telecomix released 54GB of log data alleged to have been taken from 7 Blue Coat web gateway appliances that depict search terms, including "Israel" and "proxy", that were blocked in the country using the appliances. The company later acknowledged that its systems were being used in Syria, but asserted the equipment was sold to intermediaries in Dubai for use by an Iraqi governmental agency.
Despite the systems consistently sending "heartbeat" pings directly back to Blue Coat, the company claimed not to be monitoring the logs to identify from which country an appliance is communicating. Blue Coat announced that it would halt providing updates, support and other services for systems operating in Syria. In April 2013, the Bureau of Industry and Security announced a $2.8 million civil settlement with Computerlinks FZCO, a Dubai reseller, for violations of the Export Administration Regulations related to the transfer to Syria of Blue Coat products.
The company's devices were also used to block the public from viewing certain websites in Bahrain, Qatar, and U.A.E., among others. By 2013, Citizen Lab had published 3 reports regarding the company's devices being found in countries known for using technology to violate human rights. It identified 61 countries using Blue Coat devices, including those known for censoring and surveilling their citizens' internet activity, such as China, Egypt, Russia, and Venezuela. However, "it remains unclear exactly how the technologies are being used, but experts say the tools could empower repressive governments to spy on opponents."
See also
Kaleidescape
References
1996 establishments in Washington (state)
1999 initial public offerings
2016 mergers and acquisitions
Business ethics
Software companies established in 1996
Software companies disestablished in 2016
Cyber-arms companies
Computer security software companies
Content-control software
Defunct software companies of the United States
Defunct companies based in the San Francisco Bay Area
Dot-com bubble
International information technology consulting firms
Server appliance
NortonLifeLock acquisitions
Satyamev Jayate (talk show)
Satyamev Jayate is an Indian Hindi-language television talk show aired on various channels within Star Network along with Doordarshan's DD National. The first season of the show premiered on 6 May 2012 and marked the television debut of popular Bollywood actor and filmmaker Aamir Khan. The second season of the show was aired from 2 March 2014 and the third season started from 5 October 2014.
The show focuses on sensitive social issues prevalent in India such as female foeticide, child sexual abuse, rape, honour killings, domestic violence, untouchability, alcoholism, and the criminalization of politics. It also highlights the often-unnoticed achievements of ordinary people, to encourage the audience to pursue their goals despite obstacles, and it aims to empower citizens with information about their country and urge them to take action. While the primary language of the show is Hindi, it is simulcast in eight languages, including Bengali, Malayalam, Marathi, Tamil and Telugu, and subtitled in English, to ensure maximum reach.
The first season of Satyamev Jayate garnered over a billion digital impressions from 165 countries, with responses from viewers in many countries, mainly Asian and African countries, including India, China, Djibouti, Seychelles, Sierra Leone, Isle of Man, and Papua New Guinea. A sum of was received as donations by the NGOs featured in this season. The second season was watched by 600 million Indians. The causes raised in the second season were supported by over 30 million people, and the season generated more than one billion impressions online.
Production
Concept
The concept of the show was not revealed in the Indian media until the show officially went on air on 6 May 2012. There were also no get-togethers, parties or press conferences organized to discuss the content leading up to the premiere of the show. However, various sources reported the show to be based on "the common man" rather than being fictional. Based on its content, it was mostly referred to as a talk show discussing social issues like child labour, health problems and other issues affecting the country. Khan, who is known for maintaining secrecy around his films, was quoted as saying, "I don't want to talk much about how the show will be, and about its format. I want everyone to see it directly on TV." However, commenting on the theme of the show, he said, "The show is about meeting the common man of India, connecting with India and its people." He also added, "Through this show we understand the problem of the people, we are not here to make a change. I am no one to change anything. I don't think I am in the position to change anything else. I feel understanding a problem and feeling it or holding one's hand or hugging is also important. I may not have the solution, but at least I can hear and understand."
Development
Uday Shankar, CEO of STAR India, suggested that Aamir Khan venture into television. Khan was hesitant at first but agreed, and worked on the concept of the show for two years. In an interview with Zee News, he said, "Initially I was scared to do the show as we were travelling on a different path. I was worried. It was in a way difficult, but we knew what we are doing is different, but it comes straight from our heart." He also added, "I cannot say I understand TV completely. I was earlier scared to go ahead with the project. I can only say I have made this show with complete honesty and without compromising on anything."
The shooting of the show took place in several states of India, and Khan traveled extensively over several weeks to various places in Rajasthan, Kashmir, Kerala, Delhi, Punjab, and the North-East. The studio portions of the show were shot at Vrundavan Studio and Yash Raj Studios in Mumbai. Khan was adamant about naming the show 'Satyameva Jayate' as he felt it completely gelled with the theme, indicating that the show is of, for and belongs to the people of India. However, Khan later learned that the title 'Satyameva Jayate' belonged to the country and could not be registered for copyright, as it cannot be exploited creatively for promotional activities. The team nevertheless went ahead and borrowed the title.
Soundtrack
Aamir Khan came up with the idea of launching a music album for the show. He, along with Ram Sampath, the music director of Delhi Belly, created 13 songs for the 13 episodes telecast in the inaugural season of the show. The songs from the album were released digitally every week on Hungama.com, and also across all mobile operators, as the series proceeded. The album was also released in Malayalam, Tamil, Telugu and Kannada. Sukhwinder Singh recorded a 22-minute-long song for the show which was written by Prasoon Joshi. On the song, Sukhwinder Singh said, "Yes, I, Nishit Mehta, Swaroop Khan and many more have sung a song which will be used in the title track and some will be incorporated in the show. It is a dream song; it is a song which will reflect humanity and nationality."
Controversy
Palash Sen, the lead singer of the Indian rock band Euphoria, sued Ram Sampath and sent him a legal notice for plagiarism of the show's title track. Sen commented, "The trailers and the anthem of this show have been running on television for quite a while, but I had not seen the videos. When I heard it, I was shocked." He claimed that in 2000 his band Euphoria released its second album Phir Dhoom, one of whose songs was named "Satyameva Jayate". According to him, the refrain of Ram Sampath's track is exactly the same as the chants in his song. He went on to add, "They've basically used the same refrain. Jo baaki ka gaana hai (the rest of the song), has different words and tune. But the mainstay of the song — the chorus – is ours." Sen also stated that he would not have minded giving Sampath permission to use the tune of his version of "Satyamev Jayate" had Sampath asked for his consent; in return he would have asked only for a small credit to the band.
Broadcast
Satyamev Jayate was the first show of Indian television which aired simultaneously on a private channel network STAR and a national broadcaster Doordarshan. The show was dubbed in several languages including three southern Indian languages viz., Malayalam, Tamil and Telugu, along with Bengali and Marathi. Apart from Star Plus, the show was simulcast on STAR World (with English Subtitles), STAR Utsav, STAR Pravah, STAR Jalsha, Asianet and STAR Vijay within the STAR Network and other channels including DD National and Eenadu TV. Besides India, the show aired in over 100 countries around the world. On the broadcast, Khan commented, "This show is being launched on a large scale. I had to make complete use of this medium. This is a baby step in that direction."
Satyamev Jayate was dubbed into Kannada but was not broadcast in the language, due to a self-imposed and self-enforced ban by the Kannada film industry, which does not allow a dubbed version of any serial or movie to be telecast in the state. The Kannada film industry does, however, permit Kannada remakes. After the first episode, Shiv Sena chief Bal Thackeray requested the Karnataka government to allow the show to be aired. On the issue, Thackeray commented, "This show must not be tied up in parochial chains or barred from the public. This is our view." Khan also requested the Karnataka Film Chamber of Commerce to allow the show to be aired in Karnataka. The first episode in Kannada was later released on the show's official site.
The show premiered on 6 May 2012 and was telecast on Sundays at 11 am IST thereafter. While the channel authorities favoured showcasing it in the 9 pm prime-time slot, Khan wanted to telecast it on Sunday mornings, since that slot had come to be considered "family TV viewing" time in Indian television after the popular shows Mahabharata and Ramayana aired on Sunday mornings to great success. He was reported as saying, "I wanted to telecast my show on Sunday morning. I want each family to watch the show and connect with it. We have watched Ramayana and Mahabharata and it used to come on Sunday morning. The shows created a different atmosphere." He further added, "I don't understand TRPs and GRPs. I have no idea how to calculate it! I don't care about it. It is important to reach out to the Indian audiences. If viewers want to see, they will see or else it is up to them."
Special screening in villages
Star Plus organised a special screening of the first episode in some villages of Gujarat, Maharashtra and Uttar Pradesh where villagers do not have access to television. The initiative was said to have been taken to ensure that Satyamev Jayate, which caters to the issues of the common people, reached all corners of the country. The program was screened on 6 May 2012, at the same time it was aired across the country, on community TV sets in villages such as Bhingara and Kahupatta in Maharashtra, Chepa in Gujarat, Jhunkar in Madhya Pradesh, and Tikeri, Lalpur, Sarauta, Khannapurwa and Maniram in Uttar Pradesh. Most of these villages are reported to have populations of less than 5,000. Gayatri Yadav from STAR India stated, "This is an important and relevant show for all of India and Star India is going all out to make sure that this show reaches out to all Indians even in places with limited or no TV connectivity." Based on the response to the first episode, STAR considered screening subsequent episodes in a similar manner.
Sponsors
The show was presented and sponsored by Airtel and co-sponsored by Aquaguard. Other associate sponsors included Coca-Cola, Johnson & Johnson, Skoda Auto, Axis Bank, Berger Paints and Dixcy Scott. The title sponsorship was signed at around to , while associate sponsors paid around to . STAR India reportedly gave its sponsors exclusivity by not selling advertising spots to any of their competitors' brands. Khan asked his brand managers not to buy any advertising slots or screen any of his advertisements during the show, fearing dilution of its impact. The channel was reportedly paid a sum of for a 10-second advertising slot and had only 30 seconds of ad inventory left after the first episode went on air.
Marketing
Promotions
The teasers of the show premiered on YouTube on 2 April 2012. The makers of the show booked around 2,000 slots for the broadcaster's promos in 27 hours for an amount of . This was reportedly the most expensive promotional campaign for any Indian television show. The show was also promoted in Indian theaters: the theme song was shown in 300 theaters across the country after the national anthem. In addition, interactive sessions about the show were organized with audiences in selected multiplexes of Mumbai and New Delhi; the responses of the audience were recorded and shown on screen during movie intervals. B. V. Rao of Aamir Khan Productions claimed that "This is the first time that a TV show is being promoted in cinema halls." On 3 May 2012, Khan appeared on the daily soap Diya Aur Baati Hum, aired on Star Plus, to promote the show.
Theme song
The promo song of the show was composed by Ram Sampath, written by Prasoon Joshi and sung by Keerthi Sagathia and Nishit Mehta. The team initially planned to compose a national song or anthem, but instead composed a romantic love song reflecting love for the country and relating it to each Indian. The song was shot by Ram Madhvani in different states of India and was released on 13 April 2012.
Mobile application
An official software application for the show was developed by Hungama Digital Media Entertainment and was released on the Apple App Store for iOS devices including the iPhone, iPad and iPod for approximately. The app allowed Apple users to stream songs and videos from each episode and supported social integration, donation through the Airtel Money service, and following the show's official Twitter timeline. Upon release, the app surpassed Instagram to make it into the top 25 apps on the App Store within a couple of days. The app also ranked number one in the entertainment category.
Reception
Critical response
The show opened to highly positive reviews. Several media organizations praised Aamir Khan for his effort and described the show as a movement. In her review, Ritu Singh of IBN Live stated, "Aamir Khan deserves an applause for bringing up such a sensitive issue and presenting it in a hard hitting way. The amount of research Aamir and his team has put into the show was clearly visible with the facts and figures presented. Every aspect of the issue was covered with great diligence." She concluded by saying, "'Satyamev Jayate' is not just a show; it’s a movement to change people’s mindset." Parmita Uniyal from Hindustan Times praised the content and format of the show and said, "Aamir Khan have to step in to do what journalists are supposed to do – make a difference. The show is a classic example of that." Gayatri Sankar from Zee News described the show as an "eye opener" and commented that, "Satyamev Jayate will make you unlearn all the wrong you have learnt and discover that compassionate human your soul wishes to be. The show grips you and leaves you dumfounded! You will be left asking for more and would wish the show never ends." Trade analyst and film critic Komal Nahta commented, "I cried while watching the show. I think people will watch it as it has touched an emotional chord." Neeraj Roy, managing director and chief executive of Hungama Digital Media Entertainment, also praised the show, commenting, "Brilliant effort. Well done Aamir Khan and Satyamev Jayate. We can make a difference." Sukanya Verma from Rediff.com expressed concerns regarding the show, saying, "This is a grand initiative and a sound format into which a lot has been invested -- monetarily as well as in terms of research. Deriding this show simply because it is hosted by a Bollywood actor who is also a marketing whiz, questions our credibility, not his."
Some reviewers also criticized the show on various grounds. A review from Outlook India noted that, "[...] the show might well heighten awareness, enable the efforts of those doing real work at the ground level, and get the issue out of the denial closet, [...] But it is a little unrealistic to expect a film star and a TV show to change the world." Subhash Jha from The Times of India commented that the show, "...though brave and thought provoking, was disappointing in its lack of genuine connectivity between the host and the victims of social atrocity. At the moment Satyamev Jayate looks like a product of elitist conscientiousness." Sheela Bhatt, from Rediff.com, commented that the format of Satyameva Jayate has to be more profound, and that the big problem of the show is that it follows predictable lines. She concluded by requesting Khan to bring some raw energy into the show.
Viewers' response
The show also received positive feedback from various eminent quarters, including social activists, media houses, and film and television personalities. Bal Thackeray, founder and chief of the political party Shiv Sena, praised the show and Khan for bringing social issues before the public. Prominent social activist and retired IPS officer Kiran Bedi described the show as "creative, evidence based, emotionally connecting and inspiring", commenting that, "It is an expression of the power of media and the inherent potential of society in resolving its own problems."
Noted film actress Shabana Azmi appreciated the show for its research and emotional content: "Aamir Khan's show can bring a revolution. Thoroughly researched covers all aspects touches emotional chord n forces us to reexamine ourselves." Film producer Ekta Kapoor proclaimed it "the best show of the decade", while film directors Madhur Bhandarkar and Farhan Akhtar praised the show for bringing the "desired change" to the small screen and for being "a show with a heart", respectively. Other prominent personalities who lauded the show included Salman Khan, Dilip Kumar, Preity Zinta, Dia Mirza, Boman Irani, Neha Dhupia, Mandira Bedi, Kabir Bedi, Mini Mathur, Kabir Khan, Maria Goretti, Vishal Dadlani, Ken Ghosh, Harsha Bhogle and Pritish Nandy.
Apart from critics and film, social and political personalities, the show was well received by television viewers, who described it as "a gutsy, hard-hitting and sensible program that strikes an emotional chord with the audiences." As per Indiantelevision.com, the show garnered an overall television rating (TVR) of 4.27 (including the terrestrial reach of DD) across the six metropolitan cities of Delhi, Mumbai, Kolkata, Chennai, Bangalore and Hyderabad upon its premiere telecast on 6 May 2012. According to Television Audience Measurement (TAM), the show reached 8.96 million people in the 4+ age group. One hundred thousand people from across the country tried calling in to speak to Khan during the show, of whom only 10 or 11 could eventually get through.
On its premiere day, several topics related to the show were seen trending on the microblogging site Twitter, occupying the "top five trends" on the site. The show received as many as 2,254 tweets even before its premiere telecast ended, reaching more than 3,800 tweets by the end of the day. The hashtag #SatyamevJayate began trending in the morning and stayed at the top through the end of the day. The show's official site, satyamevjayate.in, crashed within minutes of the end of the first episode due to heavy traffic. The site posted the message, "Thank you for your overwhelming response to Satyamev Jayate. Unfortunately our servers have crashed due to the traffic, will be back soon." Satyamev Jayate was also the top search in India on Google Trends. As of May 2012, the show's official page on Facebook had received more than 1,451,202 likes, of which 233,000 were received on the day of its premiere. The first episode also opened many discussions on the video-sharing site YouTube, where people left emotional messages. "A man who wanted a boy said that after watching the show he cried and he apologized to his wife and said that he just wants her to be happy," said Khan when talking to media. Hindustan Times conducted an online poll asking viewers, "Did you like Aamir Khan's Satyamev Jayate?", to which 88% of viewers said they liked the show, 5% felt that Khan and the show failed to impress, and 7% remained undecided, waiting to watch more episodes before forming their opinion.
Persistent Systems Ltd, an Indian IT consultancy, was the analytics (insights) partner for the show. Season 1 of Satyamev Jayate garnered 1.2 billion impressions on the web and ranked as the most talked-about new show on social media in the world. Gigaom noted that "Satyamev Jayate, one of India’s highest-rated television shows, is using data as a means to effect meaningful change". Persistent Systems used a big data solution to help Satyamev Jayate keep track of how its viewership interacted with the show. The show aggregated viewer responses from social media (Facebook, Twitter and YouTube), the Satyamev Jayate website (SpeakUp and Disqus) as well as from dedicated phone lines for the show. The variety of languages used in the responses, the different formats of the responses, and the need to unearth sentiment from the messages were challenging for the analytics team. Most responses came in Hinglish, a mix of Hindi and English, in text, audio or video format. The analytics went beyond simple praise and criticism: Persistent Systems had more than 50 tags available for every episode for message analysis. Star India Network's Chief Marketing Officer Gayatri Yadav commented on the response from social networking sites: "What social media did was far beyond anything producers anticipated. It took the show and made it the people’s show."
The issues discussed on the show garnered national attention, with several being discussed in parliament and influencing politicians and lawmakers to take action. After the first episode, for example, Rajasthan Chief Minister Ashok Gehlot urged public representatives and non-governmental organisations to take action to stop the illegal practice of female foeticide. Khan met Gehlot over the issue, and Gehlot accepted the request to set up a fast-track court to deal with the case of the sting operation featured on the show. Following the second episode, the helpline for children received an increased number of calls from across the country reporting child abuse. Legislation to protect children below 18 years of age from sexual abuse became a reality when the Lok Sabha passed the bill. After another episode exposed medical malpractice, Aamir Khan became the first non-MP to be invited to the Indian parliament, where he and his creative team presented research on the subject and discussed core issues related to the medical fraternity.
The show has also been well received overseas, including many parts of Asia as well as some parts of Africa. The overseas market where it received the most interest was China, where it was initially watched online before being licensed to air on Chinese television in 2014, due to Aamir Khan's popularity there after the success of his film 3 Idiots (2009). His work on Satyamev Jayate, as well as similar issues raised in his films, such as Taare Zameen Par (2007), 3 Idiots, PK (2014), and Dangal (2016), has led to Khan being referred to as a "national treasure of India" or "conscience of India" by Chinese media. The show is highly regarded in China, where it is one of the highest-rated productions on popular Chinese site Douban.
Comparisons
Indian television producer-director Siddhartha Basu compared the show with The Oprah Winfrey Show stating, "I think Aamir makes for a pensive and studied Oprah. More power to him and even more power to issues he raises that affect us all. Hopefully, it will get people thinking and acting on it on a much bigger scale". The Hindustan Times and Wall Street Journal also compared the show with The Oprah Winfrey Show.
Parmita Uniyal from Hindustan Times felt the show succeeds in convincing people of the outcomes of such practices better than a government campaign on female infanticide or a television soap like Balika Vadhu. Esha Razdan from Daily Bhaskar said the show was similar to reality shows like Crime Patrol or Bhanwar that were successful in staging the emotions perfectly on small screen but were treated differently in terms of format.
Episodes
Season One
Season Two
Season Three
See also
7 RCR (TV Series)
Samvidhaan (TV Series)
Pradhanmantri (TV Series)
Dangal (film)
Secret Superstar
References
External links
Official Website
Indian television talk shows
2012 Indian television series debuts
StarPlus original programming
Indian reality television series
DD National original programming
Hindi-language television shows
Chuck Longfield
Charles Louis Longfield (born 1956) is an American nonprofit sector expert, speaker, and philanthropist.
Background
Longfield was born on October 25, 1956, in Boston, Massachusetts. He graduated from Boston Latin School in 1974 and went on to attend Harvard College, where he graduated with a Bachelor of Arts degree in applied mathematics in 1978. He later earned a Master of Education degree in mathematics from the Harvard University Graduate School of Education in 1989.
Career
In 1978, Longfield began work at Access International. He was the principal architect of the Access Fundraising system and was responsible for its successful installation at dozens of not-for-profit organizations. Longfield would eventually serve as Chief Operating Officer at Access International before leaving the company in 1987.
Between 1987 and 1991, Longfield took a sabbatical, earned a graduate degree from Harvard University, and began teaching mathematics to middle and high school students.
In 1991, Longfield founded Target Analysis Group in Cambridge, Massachusetts, where one of his first projects was to help public broadcasting stations evaluate and select new advanced membership software.
Target Analysis Group delivered data mining, predictive modeling, and unique collaborative benchmarking services to hundreds of nonprofits of all sizes. Clients included the American Civil Liberties Union, American Heart Association, Doctors Without Borders, Easter Seals and Oxfam America.
In November 1991, Longfield joined Karl Case, Robert J. Shiller, and Allan Weiss as a partner in Case Shiller Weiss, Inc., a firm whose sole purpose at the time was to produce home price indices designed expressly to settle financial contracts. Their work later came to be known as the Case–Shiller index. Shiller, the 2013 winner of the Nobel Memorial Prize in Economic Sciences, once noted in an interview that "we brought in a fourth, a real businessman, but (Longfield) said it would be ridiculous to have four names on the company."
Longfield went on to found Target Software in 1993 to complement the data services of Target Analysis Group. Target Software became a provider of large-scale database management and sophisticated donor relationship software for nonprofit fundraising. Its clients included high-volume direct response marketers such as the American Diabetes Association, Children's Cancer Research Fund, Greenpeace USA, Massachusetts Audubon Society, Special Olympics, and World Wildlife Fund.
In January 2007, Blackbaud acquired both Target Analysis Group and Target Software for an aggregate purchase price of approximately $60 million. Following the acquisition, Longfield became Blackbaud's chief scientist, a role in which he continues to pursue a variety of research initiatives.
Foundation
Formed in 2007, the Longfield Family Foundation is focused on enhancing educational outcomes for K-12 students in the Greater Boston area. The 501(c)(3) private grantmaking foundation was created by Chuck and Susie Longfield and achieved its nonprofit status in 2008.
The foundation currently supports more than twenty Boston area schools and nonprofits with a focus on ensuring all students are given the tools to achieve their greatest potential.
Honors and awards
2012: Direct Marketing Association Max Hart Nonprofit Achievement Award
2007: FundRaising Success Magazine Lifetime Achievement Award for Contribution to the Nonprofit Sector
Patents
2016: "Systems, methods, and computer program products for data integration and data mapping", U.S. Patent #9,443,033
References
1956 births
Living people
People from Boston
Harvard Graduate School of Education alumni
American philanthropists
CLIP OS
CLIP OS is a Linux-based operating system created by ANSSI, the National Cybersecurity Agency of France. The aim is to produce a hardened operating system that secures sensitive information and meets the needs of the French Administration.
History
CLIP OS has been in development since before 2008. In September 2018, ANSSI released two versions of CLIP OS to the public: a stable version 4 and an in-development version 5.
System overview
CLIP OS is based on the Hardened Gentoo variant of Gentoo Linux. The developers have noted that whilst it has similar aims to Qubes OS, the environment isolation mechanism is different. Further, administrators on a CLIP OS system will not be able to access user data, unlike a Qubes-based system.
See also
Security-Enhanced Linux
Gentoo Linux
Qubes OS
References
Linux |
61161680 | https://en.wikipedia.org/wiki/E3%202020 | E3 2020 | The Electronic Entertainment Expo 2020 (E3 2020) would have been the 26th E3, during which hardware manufacturers, software developers, and publishers from the video game industry would have presented new and upcoming products. The event, organized by the Entertainment Software Association (ESA), was to take place at the Los Angeles Convention Center from June 9–11, 2020. However, due to concerns over the COVID-19 pandemic, the ESA announced it would cancel the event, marking the first time since the launch of E3 in 1995 that it was not held. In lieu of that, several publishers made plans to continue with presentations of game announcements during the planned E3 period, while others opted to use more traditional marketing throughout the year.
Format and changes
In the days prior to the event, major hardware and software vendors were to host press conferences at nearby venues, where they would introduce new hardware and games that would be on display at the exhibitor's hall during the actual event. Within the event period, attendees would have been able to view these products at the exhibitor's hall, often including playable game demos, attend special presentations offered by companies, and in some cases hold private meetings with companies about their products. The E3 period is often used by journalists from video game publications as well as social media influencers to provide initial comments on these new games. It also enables retailers to plan out what products to purchase for the remainder of the year, particularly for the critical Christmas and holiday sales periods.
E3 2020 was due to continue offering public passes to the event, though the number offered was increased to 25,000 from the 15,000 of the prior E3 event.
The ESA stated that they planned to revise the format for E3 2020 to feature more interactivity for attendees, reflecting the changing audience for the show, with the goal of making it a "fan, media and influencer festival". ESA stated the event would be "an exciting, high-energy show featuring new experiences, partners, exhibitor spaces, activations, and programming that will entertain new and veteran attendees alike". ESA president Stanley Pierre-Louis said they were inspired by the Keanu Reeves moment from E3 2019 as the type of event they cannot plan for but thrive on, and wanted to create more opportunities for similar events in the future. Part of this would have been achieved by bringing more "celebrity gamers" to various facets of the exposition. ESA's creative partners included iam8bit as creative directors. However, in early March 2020, iam8bit announced they had pulled out as creative directors for the show.
Sony Interactive Entertainment, which had presented at every E3 until E3 2019, stated they would not attend E3 in 2020 for a second year in a row, as the new vision of the show did not meet their goals, and that they would instead present at a number of smaller events throughout the year. Microsoft's Xbox division had affirmed they would attend the show, where it was expected that more details of the fourth generation of Xbox consoles, including the Xbox Series X planned for release in late 2020, would be announced.
Geoff Keighley, who had organized and hosted the E3 Coliseum, a live-streamed event held over the course of E3 featuring interviews with developers and publishers, since E3 2017, said that he had decided not to participate in or be a part of E3 this year, the first time in 25 years. Pierre-Louis stated that they still planned to have digital programming like E3 Coliseum.
Cancellation due to the COVID-19 pandemic
In the wake of the COVID-19 pandemic and the state of emergency declared by Los Angeles County in early March 2020, the ESA stated that they were assessing the situation but at the time were still planning to go ahead with the event. The ESA formally announced they had officially canceled the physical event on March 11, 2020, stating "Following increased and overwhelming concerns about the COVID-19 virus, we felt this was the best way to proceed during such an unprecedented global situation. We are very disappointed that we are unable to hold this event for our fans and supporters. But we know it's the right decision based on the information we have today." In addition to providing full refunds for participants, the ESA was looking into options for virtual presentations for exhibitors to use during the planned week as an alternative event.
On April 7, 2020, the ESA told PC Gamer that they had determined they would not be able to host a digital E3 event as the disruption caused by the pandemic made it difficult to assemble the event. Instead, the ESA would offer to manage individual partners' announcements via the E3 website.
Alternative events
Microsoft
Microsoft announced after the cancellation of E3 2020 that it would host a digital event to cover information it had planned to provide at E3, including games and details on the fourth generation of Xbox consoles it launched in 2020. Starting in May 2020, Microsoft began running monthly events to reveal new games for the Xbox Series X and other hardware details.
Among the games Microsoft revealed at its May 7, 2020 event were:
The Ascent - Neon Giant
Assassin's Creed Valhalla - Ubisoft Montreal
Bright Memory: Infinite - FYQD Studio
Call of the Sea - Out of the Blue
Chorus - Fishlabs
Dirt 5 - Codemasters
Madden NFL 21 - EA Tiburon
The Medium - Bloober Team
Scarlet Nexus - Bandai Namco Studios
Scorn - Ebb Software
Second Extinction - Systemic Reaction
Vampire: The Masquerade – Bloodlines 2 - Hardsuit Labs
Yakuza: Like a Dragon - Ryu Ga Gotoku Studio
Microsoft had a second games reveal event on July 23, 2020, focusing primarily on titles from the Xbox Game Studios. These included:
As Dusk Falls - Interior/Night
Avowed - Obsidian Entertainment
CrossfireX Campaign - Smilegate
Destiny 2: Beyond Light - Bungie
Everwild - Rare
Fable - Playground Games
Forza Motorsport - Turn 10 Studios
Grounded - Obsidian Entertainment
The Gunk - Image & Form
Halo: Infinite - 343 Industries
The Medium - Bloober Team
New Genesis: Phantasy Star Online - Sega
Ori and the Will of the Wisps - Moon Studios
Psychonauts 2 - Double Fine
The Outer Worlds: Peril On Gorgon - Obsidian Entertainment
S.T.A.L.K.E.R. 2 - GSC Game World
Senua's Saga: Hellblade II - Ninja Theory
State of Decay 3 - Undead Labs
Tell Me Why - Dontnod Entertainment
Tetris Effect Connected - Monstars
Warhammer 40,000: Darktide - Fatshark
Sony
Sony ran its major reveal of the PlayStation 5 console and numerous games in an online presentation on June 11, 2020. Among the games revealed were:
Astro's Playroom - Japan Studio
Bugsnax - Young Horses
Deathloop - Arkane Studios
Demon's Souls remake - Bluepoint Games
Destruction AllStars - Lucid Games
Ghostwire: Tokyo - Tango Gameworks
Godfall - Counterplay Games
Goodbye Volcano High - KO_OP
Grand Theft Auto V and Grand Theft Auto Online - Rockstar Games
Gran Turismo 7 - Polyphony Digital
Hitman 3 - IO Interactive
Horizon Forbidden West - Guerrilla Games
Jett: The Far Shore - Superbrothers
Kena: Bridge of Spirits - Emberlab
Little Devil Inside - Neostream Interactive
NBA 2K21 - Visual Concepts
Oddworld: Soulstorm - Oddworld Inhabitants, Frima Studio
Pragmata - Capcom
Project Athia - Luminous Productions
Ratchet & Clank: Rift Apart - Insomniac Games
Resident Evil Village - Capcom
Returnal - Housemarque
Sackboy: A Big Adventure - Sumo Digital
Solar Ash - Heart Machine
Spider-Man: Miles Morales - Insomniac Games
Stray - Bluetwelve
Nintendo
Nintendo had planned a Nintendo Direct to showcase its planned offerings for the rest of 2020 as its means of alternate E3 announcements. However, complications related to the pandemic caused the event to be cancelled. Many reports indicated the event would have focused on the 35th anniversary of the Super Mario series, with titles such as Mario Kart Live: Home Circuit and Super Mario 3D All-Stars, which Nintendo later announced in September 2020.
Electronic Arts
Electronic Arts, which in previous years had generally held its "EA Play" side event at a nearby Los Angeles location alongside E3 without being part of E3 directly, instead held an "EA Play" online showcase on June 18, 2020. Among game announcements, EA stated their plan to continue bringing their games to the Steam platform for Windows (in addition to their Origin platform), including the EA Access subscription program, and to the Nintendo Switch, along with a larger commitment to cross-platform play for more of their titles. New or updated titles presented during the showcase included Apex Legends, It Takes Two, Lost in Random, Rocket Arena, and Star Wars: Squadrons, as well as a planned new game in the Skate series.
Devolver Digital
Devolver Digital, which had already planned to run a streamed event at E3, held its showcase on July 11, 2020. The showcase continued the narrative around the company's fictional chief synergy officer Nina Struthers from previous years, wrapped around the various announcements. The announcements included:
Carrion - Phobia Game Studio
Devolverland Expo - Flying Wild Hog
Fall Guys: Ultimate Knockout - Mediatonic
Olija - Skeleton Crew
Serious Sam 4 - Croteam
Shadow Warrior 3 - Flying Wild Hog
Ubisoft
Ubisoft ran a "Ubisoft Forward" digital event on July 12, 2020, announcing several upcoming titles, including:
Assassin's Creed: Valhalla
Far Cry 6
Hyper Scape
Watch Dogs: Legion
Limited Run Games
Limited Run Games announced an online presentation on June 8, 2020 for its upcoming games, but the event was delayed and ultimately canceled due to the George Floyd protests.
IGN Summer of Gaming
The video game website IGN ran an online "Summer of Gaming" expo from June 11 to June 13, 2020 that featured announcements, gameplay trailers and interviews with developers. Among the new games revealed or featured during this expo were:
13 Sentinels: Aegis Rim - Vanillaware
Alex Kidd in Miracle World DX - Jankenteam
Beyond Blue - E-Line Media
Blankos Block Party - Third Kind Games
Blue Fire - Robi Studios
Borderlands 3 - Gearbox Software
Bravery Network Online - Gloam Collective
CastleStorm 2 - Zen Studios
Chivalry 2 - Torn Banner Studios
CRSED: F.O.A.D. - Darkflow Software
Demon Turf - Fabraz
Dual Universe - Novaquark
Everspace 2 - Rockfish Games
Foreclosed - Antab Studios
GTFO - 10 Chambers Collective
Guilty Gear Strive - Arc System Works
Hardspace: Shipbreaker - Blackbird Interactive
The Iron Oath - Curious Panda Games
Lucifer Within Us - Kitfox Games
Metal: Hellsinger - The Outsiders
Mortal Shell - Playstack
Nickelodeon Kart Racers 2: Grand Prix - GameMill Entertainment
Observer: System Redux - Bloober Team
Pathfinder: Kingmaker - Owlcat Games
Phantasy Star Online 2 - Sega
Ranch Simulator - Toxic Dog
The Riftbreaker - Exor Studios
Rustler - Jutsu Games
Samurai Jack: Battle Through Time - Soleil
Second Extinction - Systemic Reaction
Skater XL - Easy Day Studios
Spellbreak - Proletariat, Inc.
Star Renegades - Massive Damage
Stronghold: Warlords - Firefly Studios
Total War: Troy - Creative Assembly
Unto the End - 2 Ton Studios
Voidtrain - Neagra
Wasteland 3 - inXile Entertainment
The Waylanders - Gato Studio
Warhammer 40,000: Mechanicus - Bulwark Studios
Werewolf: The Apocalypse – Earthblood - Cyanide
XIII Remake - Microids
Yakuza: Like a Dragon - Ryu Ga Gotoku Studio
Guerrilla Collective Live / PC Gaming Show / Future Games Show
Several independent and larger publishers presented a series of announcement streams between June 13 and June 15, hosted by Greg Miller, as part of the "Guerrilla Collective" in lieu of E3. Among those participating were Rebellion Developments, Raw Fury, Paradox Interactive, Larian Studios, Funcom, Versus Evil, ZA/UM, Coffee Stain Studios, 11 Bit Studios, and Humble Publishing.
The June 13 Guerrilla Collective presentation partnered with PC Gamer's PC Gaming Show and GamesRadar's Future Games Show, which ran their showcases the same day. Among the presenters in the PC Gaming Show were Epic Games Store, Frontier Developments, Intel, Perfect World Entertainment, and Tripwire Interactive.
The following games were announced or covered during the three shows:
30XX - Batterystaple Games
A Juggler's Tale - Kaleidoscube
Aeolis Tournament - Beyond Fun Studio
Airborne Kingdom - The Wandering Band
Alaloth: Champions of The Four Kingdoms - Gamera Interactive
Almighty: Kill Your Gods - Runwild Entertainment
The Almost Gone - Happy Volcano
Ambition: A Minuet of Power - Joy Manufacturing Co.
Among Trees - FJRD Interactive
Anno: Mutationem - Beijing ThinkingStars Technology Development
ArcheAge - XL Games
Baldur's Gate III - Larian Studios
Blankos: Block Party - Third Kind Games
Blightbound - Ronimo Games
Boyfriend Dungeon - Kitfox Games
The Cabbage Effect - Ninja Garage
Calico - CatBean Games
Call of the Sea - Out of the Blue
The Captain is Dead - Thunderbox Entertainment
Cardaclysm - Headup Games
Carrion - Phobia Game Studio
Cartel Tycoon - Moon Moose
Carto - Sunhead Games
Children of Morta - Dead Mage
Cloudpunk - Ion Lands
Colt Canyon - Retrific
Coreupt - Rogue Co
Cris Tales - Dreams Uncorporated
Crusader Kings 3 - Paradox Development Studio
Cyanide & Happiness – Freakpocalypse - Serenity Forge
Cygni: All Guns Blazing - KeelWorks
Dead Static Drive - Fanclub
Disco Elysium - ZA/UM
Disintegration - V1 Interactive
Divinity: Original Sin 2 - Larian Studios
Doggone - Raconteur Games
dont_forget_me - The Moon Pirates
Doors of Insanity - OneShark
Drake Hollow - The Molasses Flood
Dreamscaper - Afterburner Studios
The Dungeon Of Naheulbeuk: The Amulet Of Chaos - Artefacts Studio
Dustborn - Red Thread Games
Dwarf Fortress - Bay 12 Games/Kitfox Games
Dwarfheim - Pineleaf Studios
Edo No Yami - roglobytes Games
El Hijo: A Wild West Tale - Honig Studios
Eldest Souls - Fallen Flag Studios
Elite Dangerous - Frontier Developments
Empire of Sin - Romero Games
Escape from Tarkov - Battlestate Games
The Eternal Cylinder - ACE Team
Evan's Remains - Whitehorn Digital
Everspace 2 - Rockfish Games
Evil Genius 2 - Rebellion Developments
Exo One - Exbleative
Fae Tactics - Endlessfluff Games
The Falconeer - Tomas Sala
Fall Guys: Ultimate Knockout - Mediatonic
Fights in Tight Spaces - Ground Shatter
Floppy Knights - Rose City Games
The Forgotten City - Modern Storyteller
Frostpunk - 11 Bit Studios
Genesis Noir - Feral Cat Den
Gestalt: Steam and Cinder - Metamorphosis Games
Get to the Orange Door - Headup Games
Ghostrunner - One More Level
Gloomwood - New Blood Interactive
Godfall - Counterplay Games
Gonner2 - Art in Heart
Gori: Cuddly Carnage - Angry Demon Studio
Hammerting - Team17
Story of Seasons: Friends of Mineral Town - Marvelous Interactive
Haven - The Game Bakers
Hotshot Racing - Lucky Mountain Games
Humankind - Amplitude Studios
Hundred Days - Broken Arms Games
Icarus - RocketWerks
Ikenfell - Chevy Ray
In Sound Mind - We Create Stuff
Inkulinati - Yaza Games
Jay and Silent Bob Chronic Blunt Punch - Interabang Entertainment
Just Die Already - DoubleMoose
Kena: Bridge of Spirits - Emberlab
Lake - Gamious
The Last Campfire - Hello Games
Last Oasis - Donkey Crew
Later Daters - Bloom Digital Media
Liberated - Atomic Wolf
Lord Winklebottom Investigates - Cave Monsters
Lost at Sea - Studio Fizbin
Mafia: Definitive Edition - Hangar 13
Maid of Sker - Wales Interactive
Main Assembly - Bad Yolk Games
Metal: Hellsinger - The Outsiders
Midnight Ghost Hunt - Vaulted Sky Games
Minute of Islands - Studio Fizbin
Morbid: The Seven Acolytes - Still Running
Mortal Shell - Cold Symmetry
Neon Abyss - Team17
New World - Amazon Game Studios Orange County
Night Call - Monkey Moon
No Place for Bravery - Glitch Factory
No Straight Roads - Sold-Out Software
Nuts - Noodlecake
One Step from Eden - Thomas Moon Kang
Ooblets - Glumberland
Operation Tango - Clever Plays
Outbuddies DX - Headup Games
The Outlast Trials - Red Barrels
Outriders - People Can Fly
Paradise Killer - Kaizen Game Works, Fellow Traveller
Paradise Lost - PolyAmorous
Per Aspera - Tlön Industries
Persona 4 Golden - Atlus
Popup Dungeon - Triple.B.Titles
Potionomics - Voracious Games
Princess Farmer - Samboee Games
Prison Architect - Double Eleven
Prodeus - Bounding Box Software
Project Wingman - Sector D2
Pull Stay - Nito Souji
Pushy and Pully in Blockland - Resistance Studio
Quantum Error - Big Panther Media
Raji: An Ancient Epic - Nodding Heads Games
Read Only Memories: Neurodiver - Midboss
Red Sails - Red Sails Team
Remnant: From the Ashes - Gunfire Games
Remothered: Broken Porcelain - Stormind Games
Rigid Force: Redux - Headup Games
Ring of Pain - Simon Boxer
Rogue Company - First Watch Games
Röki - Polygon Treehouse
ScourgeBringer - Flying Oak Games
Serial Cleaners - Draw Distance
Shadow Man Remastered - Nightdive Studios
Shadows of Doubt - ColePowered Games
Sherlock Holmes: Chapter One - Frogwares
Skate Story - Sam Eng
SkateBird - Glass Bottom Games
Skater XL - Easy Day Studios
Skeleton Crew - Cinder Cone
Slay the Spire - Megacrit
Smite - Titan Forge Games
Source of Madness - Carry Castle
Space Crew - Curve Digital
Speed Limit - Gamechuck
Spellbreak - Proletariat, Inc.
Stage Hands! - suchagamestudio
Star Renegades - Massive Damage
Summer in Mara - Chibig
Surviving the Aftermath - Haemimont Games
Suzerain - Torpor Games
Swimsanity! - Decoy Games
System Shock Remake - Nightdive Studios
Torchlight 3 - Echtra Inc.
Trash Sailors - fluckyMachine
Twin Mirror - Dontnod Entertainment
Ultrakill - New Blood Interactive
Unbound World Apart - Alien Pixel Studios
UnDungeon - Laughing Machines
Unexplored 2: The Wayfarer's Legacy - Ludomotion
Unfortunate Spacemen - New Blood Interactive
Uragun - Kool2Play
Valheim - Iron Gate AB
Vampire: The Masquerade – Bloodlines 2 - Hardsuit Labs
Vigil: The Longest Night - Glass Heart Games
Waking - tinyBuild
Wasteland 3 - InXile Entertainment
Wave Break - Funktronic Labs
Weird West - WolfEye Studios
Welcome to Elk - Triple Topping
Werewolf: The Apocalypse — Heart of the Forest - Different Tales
West of Dead - Upstream Arcade
Windjammers 2 - Dotemu
Wolfstride - OTA IMON Studios
Summer Game Fest
Games journalist Geoff Keighley arranged with numerous developers to run a four-month Summer Game Fest from May to August 2020, helping developers and publishers host live streams and other events in lieu of the canceled E3 and Gamescom. Alongside the Summer Game Fest, Keighley promoted the third Steam Game Festival (following editions tied to The Game Awards 2019 and the previously canceled 2020 Game Developers Conference), which ran from June 16 through June 22, 2020. Over 900 games had demos available on Steam for players to try, alongside a slate of interviews with developers throughout the period. A similar event for Xbox One games ran from July 21 to July 27, 2020 as part of the Summer Game Fest.
Among games and other announcements made during the Summer Game Fest include:
Tony Hawk's Pro Skater 1 + 2, a remastered version of Tony Hawk's Pro Skater and its sequel for modern systems.
Unreal Engine 5, the next iteration of Epic Games' game engine to be released in mid-2021.
Star Wars: Squadrons, a new game from Motive Studios and Electronic Arts featuring team-play combat using the spacecraft of the Star Wars universe like X-wing fighters and TIE fighters.
Crash Bandicoot 4: It's About Time, a sequel to the original trilogy of Crash Bandicoot games on the original PlayStation console, being developed by Toys for Bob and Activision.
Cuphead releasing for the PlayStation 4.
New Game+ Expo
An online video game presentation organized by Suda51 and Sean Chiplock that showcased many upcoming games for the remainder of 2020 and early 2021.
The games that were announced during the presentation were:
13 Sentinels: Aegis Rim - ATLUS
Billion Road - BANDAI NAMCO Entertainment Inc.
Bloodstained: Curse of the Moon 2 - INTI CREATES CO., LTD.
Bright Memory: Infinite - FYQD Studio
Café Enchanté - Idea Factory
Cat Girl Without Salad: Amuse-Bouche - WayForward
Catherine: Full Body - ATLUS
Collar X Malice - Idea Factory
Collar X Malice Unlimited - Idea Factory
Cosmic Defenders - Fiery Squirrel
Danganronpa: Trigger Happy Havoc Anniversary Edition - Toydea Inc., Spike Chunsoft
Death end re;Quest 2 - Idea Factory, Compile Heart
Evolutis - Poke Life Studio
Fairy Tail - GUST
Fallen Legion: Revenants - YummyYummyTummy, Inc.
Fight Crab - Nussoft
Giraffe and Annika - atelier mimina
Guilty Gear Strive - Arc System Works Co., Ltd.
Idol Manager - GlitchPitch
Legends of Ethernal - Lucid Dreams Studio
Mad Rat Dead - Nippon Ichi Software
Mighty Switch Force! Collection - WayForward
NEOGEO Pocket Color Selection - Code Mystics
Neptunia Virtual Stars - Idea Factory, Compile Heart
Piofiore: Fated Memories - Idea Factory
Pretty Princess Party - Nippon Columbia
Prinny 1•2: Exploded and Reloaded - Nippon Ichi Software
Re:ZERO - Starting Life in Another World: The Prophecy of the Throne - Chime Corporation
Robotics;Notes Double Pack - MAGES. Inc.
Samurai Shodown Season Pass 2 - SNK CORPORATION
Samurai Shodown NEOGEO Collection - Digital Eclipse
Shiren the Wanderer: The Tower of Fortune and the Dice of Fate - Spike Chunsoft
Tasogare ni Nemuru Machi - Orbital Express
Tin & Kuna - Black River Studios
The Legend of Heroes: Trails of Cold Steel III - Nihon Falcom
The Legend of Heroes: Trails of Cold Steel IV - Nihon Falcom
Vitamin Connection - WayForward
void tRrLM(); //Void Terrarium - Nippon Ichi Software
Volta-X - GungHo America
Ys IX: Monstrum Nox - Nihon Falcom
References
2020 in Los Angeles
2020 in video gaming
Cancelled events in the United States
2020
Events cancelled due to the COVID-19 pandemic
Impact of the COVID-19 pandemic on the video game industry
Impact of the COVID-19 pandemic in the United States |
38764449 | https://en.wikipedia.org/wiki/Lumia%20imaging%20apps | Lumia imaging apps | Lumia imaging apps are imaging applications by Microsoft Mobile, and formerly by Nokia, for Lumia devices, built on technology from Scalado (except for Lumia Panorama, which was developed earlier by Nokia for Symbian and MeeGo devices). The Lumia imaging applications were all originally branded with "Nokia" in front of their names; after Microsoft acquired Nokia's devices and services business, the Nokia branding was superseded by "Lumia". Often these rebranding updates included nothing but name changes, though for Lumia Camera the update brought a wide range of new features. Most of the imaging applications are developed by the Microsoft Lund division. As part of the release of Windows 10 Mobile and the integration of Lumia imaging features into the Windows Camera and Microsoft Photos applications, some of these applications stopped working in October 2015.
Lumia Camera
Lumia Camera (previously Nokia Camera) was created as a merger of Nokia Pro Cam and Nokia Smart Cam into the new Nokia Camera application, introduced with the Lumia Black update. As introduced in 2013, Nokia Camera offered three basic modes: Camera Mode, Smart Mode, and Video Mode. In Smart Mode, users could take multiple shots at once in a burst, remove objects from the frames, change faces, and create "sequence shots". Camera Mode featured Night and Sports modes, and Video Mode offered video recording. Owing to their high-end specifications, the Nokia Lumia 1020 and Nokia Lumia 1520 received raw DNG support.
With Lumia Cyan, Nokia improved the low-light performance of Nokia Camera, added continuous autofocus and support for living images, and improved the overall performance of the application. With Lumia Denim, Microsoft rebranded the application as Lumia Camera and launched version 5.0, which included many new features but was limited to high-end Lumia devices with PureView technology. Alongside the launch of Lumia Camera, the Lumia Beta Apps' Nokia Camera Beta app was rebranded as Lumia Camera Classic for the Nokia Lumia 830, Nokia Lumia 930, Nokia Lumia Icon, and Nokia Lumia 1520.
Lumia Camera 5.0 added features such as faster shooting, 4K-quality video recording that captures 24 frames per second with each frame an 8.3-megapixel shot, and Rich Capture, which adds Auto HDR and Dynamic Flash. Because of the hardware these features require, they have not been made available on cheaper and older Lumia devices and remain exclusive to newer handsets.
In August 2015 it was reported that the Lumia Camera was no longer an exclusive application to Nokia and Microsoft Lumia devices and could be downloaded on other Windows Phone devices, but would lack several features such as High Dynamic Range (HDR), Dynamic Exposure, and Dynamic Flash which were more bound to Lumia hardware than software.
The ownership of the Lumia Camera UI has been transferred to HMD Global, and it was introduced as part of Nokia Pro Camera in February 2018.
Lumia Cinemagraph
Lumia Cinemagraph (earlier Nokia Cinemagraph) is an application and Windows Phone camera lens, formerly by Nokia and now by Microsoft, which is bundled with its Lumia Windows Phone smartphones. Nokia Cinemagraph was originally based on technology from Nokia's acquisition of Scalado. The software enables the creation of subtle animated GIFs (or cinemagraphs) from images, which can contain stationary and moving components in the same picture. Despite the GIF-like effect, images are exported as regular JPG files, and shared Lumia cinemagraphs cannot be viewed on other devices.
It was renamed to Lumia cinemagraph after the acquisition of Nokia's Devices and Services units by Microsoft in 2014.
The Lumia Beta Apps division launched Lumia Cinemagraph Beta, which migrated content from Nokia's website to Microsoft OneDrive, and this feature was subsequently implemented in Lumia Cinemagraph. Previously, cinemagraphs were synchronized via the Nokia Memories site.
Lumia Creative Studio
Lumia Creative Studio (previously Nokia Creative Studio) is an image editing application that lets users edit photographs, merge them into panoramas, and apply effects with photo filters such as sketch, night vision, dreamy, cartoon, colour, and silkscreen. Nokia Creative Studio was later updated to let users edit and create new photographs directly using the built-in camera, adding cropping, adjusting, and Instagram-like effects; additional features include face warps and live styles, filters that let users edit the viewfinder in near real time. Images created using Lumia Creative Studio can be shared on social networks such as Twitter, Flickr, and Facebook. Nokia Creative Studio 6 introduced non-destructive editing, which allows users to rework images an unlimited number of times; other features added in version 6 include rotation, straightening, and changing the aspect ratio.
Lumia Panorama
Lumia Panorama (previously Nokia Panorama) was a Lumia app that originated on Symbian and MeeGo. It allowed users to take panoramas in both portrait and landscape mode and to share them directly on Facebook and Twitter. In September 2015, Microsoft announced that it would no longer release updates or support for Lumia Panorama.
Lumia Play To
Lumia Play To (previously Nokia Play To and Play To) is a DLNA-based application that allows users to share media across devices. It originally debuted on Symbian handsets and was introduced through Nokia Beta Labs in 2012 for Lumia handsets.
Lumia Refocus
Lumia Refocus (previously Nokia Refocus), revealed at the Nokia World event in Abu Dhabi in 2013 and exclusive to Lumia devices with Nokia PureView capabilities, allowed users to alter the focus of pictures after taking them. When launched, it started in camera mode, analyzed the scene, and took between two and eight photos that the user could refocus afterwards; it also had social network sharing built in. In September 2015, Microsoft announced that it would discontinue the online service related to Lumia Refocus on October 30, 2015 and remove the application from the Windows Phone Store.
Lumia Selfie
Lumia Selfie (previously Nokia Glam Me) is a "selfie" application that can use both the front and the main camera. It offers facial recognition (though it has trouble recognising faces wearing eyeglasses) and can edit selfies by giving the user a bigger smile and whiter teeth, enlarging their eyes, changing colours, and making the picture brighter or darker. In August 2015, Microsoft added support for selfie sticks.
Lumia Share
Lumia Share (previously Nokia Share) is a photo synchronisation application for Windows and Windows Phone designed to connect a Nokia Lumia smartphone to the Nokia Lumia 2520. It shares photos through the Lumia Storyteller app over Wi-Fi and works exclusively with the Nokia Lumia 2520; it does not work with any other Windows device.
Lumia Storyteller
Lumia Storyteller (previously Nokia Storyteller) was a scrapbooking app for Windows and Windows Phone that displayed photos and videos taken on a Lumia device in the manner of a story. It integrated with Here Maps to show where each individual photograph and video clip was taken, and videos recorded on the Nokia Lumia 1520 could capture directional stereo audio. In September 2015, Microsoft announced that it would discontinue Lumia Storyteller and its associated online service on October 30, 2015, as some of its features would be implemented in the Windows 10 Photos app.
Lumia Video Trimmer
Lumia Video Trimmer (previously Nokia Video Trimmer) is a video editing application originally launched by Nokia that allows users to trim and share videos recorded on their Lumia devices.
Movie Creator Beta
Movie Creator Beta (previously Nokia Video Director, though the Nokia Video Director application is still published in the Windows Phone Store as a separate client) is a video creation application launched by Microsoft Mobile in 2014 for Windows and Windows Phone. It allows users to merge photographs, video clips, music samples, and text into movies. It features unlimited video length (though limited to a total of 25 content slots), pan-and-zoom, and various filters and themes, including some based on DreamWorks' Kung-Fu Panda and Madagascar films. It is compatible with content created on other devices, though such content must first be moved to the device on which Movie Creator is installed before it can be edited.
In April 2015 Movie Creator Beta got Microsoft OneDrive and 4K video support.
Microsoft Photos Add-ins
Microsoft Photos Add-ins (previously called Lumia Moments) is a Lumia Denim feature that can take frames from videos and turn them into individual pictures. Because of the processing power it requires, only the Nokia Lumia Icon, Nokia Lumia 1520, Nokia Lumia 830, and Nokia Lumia 930 can run the software, as the Lumia Imaging SDK integrates with newer Qualcomm Snapdragon processors. Lumia Moments can create two types of images: Best Frame, which saves a photo as a "living image" that can be viewed in motion from Windows Phone's camera roll or Lumia Storyteller, and Action Shots, which lets users add strobe effects and blur to pictures.
With Windows 10 Mobile, Microsoft quietly rebranded the Lumia Moments application; the removal of the Lumia branding happened at the same time as the discontinuation of several other Lumia imaging applications.
PhotoBeamer
PhotoBeamer (previously Nokia PhotoBeamer) was an imaging application originally created by Scalado and ported to Windows Phone after the Nokia acquisition. It lets users show an image from their Lumia device on any display, as long as that display is connected to the internet. When beaming, the application asks the user to go to the PhotoBeamer web page on the other internet-capable device (including other mobile telephones, tablets, notebooks and smart televisions); the PhotoBeamer site shows a QR code that the user scans with Bing Vision after downloading the application from the Windows Phone Store. In September 2015 Microsoft announced that it would discontinue the PhotoBeamer application and its associated web page, as some of its features would be integrated into Windows 10.
Smart Shoot
Smart Shoot (previously Nokia Smart Shoot) is a camera application that lets users capture selected parts of a photo. A lens for the camera app takes a number of photos in quick succession and then allows users to select "the best" moments from each individual image. Smart Shoot can select faces from the successive images, so if one person blinks during one shot but not during the next, the user can choose the faces they prefer for the picture. Smart Shoot can be launched through the application picker or via the Microsoft Camera's lenses; when taking a shot the app asks the user to hold the phone steady, and once the picture is correctly taken users may remove objects and people from the photograph.
Video Upload
Video Upload (previously Nokia Video Upload) was a video upload application that let Lumia users upload their videos to Google's YouTube. Initially it was launched only for the Nokia Lumia 1020 but was later expanded to additional devices. In September 2015 Microsoft announced that it would no longer update or support the Video Upload application, and that while people who had previously downloaded it could continue to use it as before, it would immediately be removed from the Windows Phone Store.
Video Tuner
Video Tuner (previously Nokia Video Tuner) is a video editing application that enables trimming, slowing down footage, brightening the video's colours, adding music to frames, applying filters, rotating, cropping and sharing directly to Instagram. The software integrates with the Lumia Imaging SDK, which enables developers to build similar editing features into their own applications.
See also
Microsoft Lumia
Lumia Beta Apps
PureView
Google Camera
Microsoft mobile services
References
External links
Cinemagraph on your Nokia Lumia 920 & Lumia 820
Lumia Refocus
Microsoft's PhotoBeamer
Lumia Storyteller
Windows Phone software
Nokia services |
17763039 | https://en.wikipedia.org/wiki/Twelve%20Tricks | Twelve Tricks | Twelve Tricks is a Trojan horse that first appeared around 1990.
Purdue University issued a bulletin about the Trojan on March 8, 1990. The Trojan came in an altered utility file called CORETEST.COM, which was intended to test performance of hard drives. It affected IBM platform computers running MS-DOS or PC DOS. The Trojan alters the master boot record (partition sector) and, at every reboot, it installs one of twelve "tricks" that causes issues with hardware or operation of the computer. The trick vanishes when the power is cut off, and any of the twelve tricks may appear or reappear on the next reboot. In addition, on each boot the Trojan uses a random number generator to determine whether to do a low-level format of the active copy of the boot sector and the first copy of the FAT; there is a 1/4096 chance of this happening. If the format does not happen, the Trojan randomly changes one random word in any of the first sixteen sectors of the FAT, leading to a gradual corruption of the file system.
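The 1/4096 trigger means the destructive format is rare on any single boot but becomes likely over a machine's lifetime; a quick back-of-the-envelope calculation (my own illustration, not code from the Trojan itself):

```python
# Chance that the destructive low-level format has fired within n reboots,
# assuming an independent 1/4096 draw on each boot.
def format_probability(n: int, p: float = 1 / 4096) -> float:
    return 1 - (1 - p) ** n

# With p = 1/4096, the expected number of boots before the format is 1/p = 4096.
assert 0.02 < format_probability(100) < 0.03    # roughly 2.4% after 100 boots
assert 0.21 < format_probability(1000) < 0.22   # roughly 21.7% after 1000 boots
```

Meanwhile the once-per-boot corruption of a random FAT word is certain, which is why the file system degrades gradually long before the format ever fires.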
References
External links
Morality and Machines 214.
Trojan horses |
497552 | https://en.wikipedia.org/wiki/James%20Rennell | James Rennell | Major James Rennell, FRS FRSE FRGS (3 December 1742 – 29 March 1830) was an English geographer, historian and a pioneer of oceanography. Rennell produced some of the first accurate maps of Bengal at one inch to five miles as well as accurate outlines of India and served as Surveyor General of Bengal. Rennell has been called the Father of Oceanography. In 1830 he was one of the founders of the Royal Geographical Society in London.
Early life
Rennell was born at Upcot near Chudleigh in Devon. His father, John Rennell, an officer in the Royal Artillery, was killed in action in the Low Countries in July 1747 during the War of the Austrian Succession. His mother Anne subsequently married Mr Elliott, a widower with children of his own and unable to care for additional ones, leading to Rennell being brought up by a guardian, the Rev. Gilbert Burrington, vicar of Chudleigh. The ancient paternal Devonshire family name was formerly spelt Reynell and was of French origin.
Rennell entered the Royal Navy as a midshipman at the age of fourteen under Captain Hyde Parker, who was in the process of commissioning the frigate (launched in October 1757) at the beginning of the Seven Years' War. He was present at the Raid on Cherbourg (1758), and at the disastrous Battle of Saint Cast in the same year. In 1760, he went out to East India, and served in under Captain Hyde Parker during the three following years, when he saw some active service, including a cutting-out expedition at Pondicherry. He soon mastered the theory and practice of marine surveying, and, on account of his proficiency in this regard, Parker lent his services to the East India Company. He served for a year on board one of the company's ships bound to the Philippines, with the object "of establishing new branches of trade with the natives of the intervening places". Rennell accompanied the hydrographer Alexander Dalrymple and drew several charts and plans of harbours on journeys in the schooner Cuddalore (1759–62), the London (1762–63) and the Neptune (1763–64).
In 1763, at the end of the Seven Years' War, seeing no chance of promotion, Rennell entered the service of the East India Company's sea service. He at once received command of a vessel of two hundred tons; but it was destroyed by a hurricane in Madras Roads in March 1763. Fortunately, his captain was on shore, and he was at once appointed to command a small yacht called the Neptune, in which he executed surveys of what is now called the Palk Strait and the Pamban Channel. Rennell became a good friend of the Governor of Madras, Robert Palk, who also came from Devon. His next cruise was to Bengal, and he arrived at Calcutta at a time when Governor Vansittart was anxious to initiate a survey of the British territory. Owing to the friendship of an old messmate Mr Topham, who had become the governor's secretary, he was chosen for survey duties and was initially commissioned as an ensign in the Bengal Engineers, dated 9 April 1764.
Survey work in India
Rennell initially surveyed the Ganges river starting in the autumn of 1764, encountering in 1766 the mountains that he called the Tartarian mountains (the Himalayas). The main purpose of the survey was to find a navigable waterway from Calcutta to the northern regions. In the same year Captain Rennell nearly lost his life when the party of surveyors were attacked by sanyasis on the Bhutan border. Injured badly in the shoulder blade, he was taken by boat to Dacca, a journey that took six days. He was treated by Dr Francis Russell and was fortunate to recover although he never gained full function of his right arm. On return Rennell's work was much appreciated and he was recommended for a handsome salary of £1000 a year. Rennell immediately sent money home to his mother and sister while also showing gratitude by sending gifts to Rev. Burrington's children. In 1767 Lord Robert Clive, the then Governor of Bengal and Bihar, appointed him as surveyor-general of the East India Company's dominions in Bengal. After this Rennell was always accompanied by a company of Sepoys.
The headquarters of the surveyor-general were at Dacca, and in the successive working seasons he gradually completed his difficult, laborious, and dangerous task. Mapping was accomplished in the field by army men, with distances measured using a perambulator - a calibrated wheel whose revolutions were counted to calculate distance - and a compass. Economic pressures further forced a reduction in pay, and there was dissatisfaction among the native ranks on the question of double batta (part payment). The surveys were sometimes dangerous: a party of his men was attacked by a leopard, and Rennell had to use his bayonet to save them. Malaria was common, and in 1771 he was briefly diverted to military duties in an expedition against a band of raiders. The mapping project was originally a general survey of newly acquired lands, but the job soon gained a wider scope under Warren Hastings, who was appointed as Governor-General in 1773. One of Hastings' first projects was to begin a Domesday-style reckoning of property, land, people, and culture for taxation of revenue. As for Rennell's part in this, his project was carried out much like a military survey, searching for safe passage through territory, with information gathering a secondary object. Rennell received the rank of major of Bengal engineers on 5 April 1776, and retired from active service in 1777. The government of Warren Hastings granted him a pension of £600 per annum, which the East India Company somewhat tardily confirmed.
The remaining fifty-three years of his life were spent in London, and were devoted to geographical research chiefly among the materials in the East India House. He took up his residence in Suffolk Street, near Portland Place, where his house became a place of meeting for travellers from all parts of the world.
Achievements
Rennell's first and most influential work was the Bengal Atlas (1779) which was followed by the first detailed map of India (1783), the Geographical System of Herodotus (1800), the Comparative Geography of Western Asia (1831), and important studies on the geography of northern Africa—apparent in introductions to the Travels of Mungo Park and Hornemann. He however forged the geographic data of Park by introducing a mountain range, the Mountains of Kong, supposedly located in the western part of Africa. The fake was intended to support his own theory on the course of the Niger River. The Mountains of Kong remained present in maps until the early 20th century. He also contributed papers to Archaeologia on the site of Babylon, the island of St Paul's shipwreck, and the landing-place of Caesar in Britain. Rennell published a book titled Memoir of a map of Hindoostan (1788) and dedicated to Sir Joseph Banks.
Beside his geographical and historical works, James Rennell is known today for his hydrographical work on the currents in the Atlantic and Indian oceans. He started his research on these topics when he was travelling by sailing ship with his family from India to Britain after his retirement in 1777. During the extraordinarily prolonged voyage around the Cape of Good Hope he mapped "the banks and currents at the Lagullas" and published in 1778 the work on what is today called the Agulhas Current. This was one of the first contributions to the science of oceanography. He was the first to explain the causes of the occasional northern current found to the south of the Isles of Scilly, which has since been known as Rennell's Current.
After the death of his wife in 1810, he returned to the oceanographic topics. His numerous naval friends gave him a mass of data from their logs, which he assimilated into a chart of all currents in the Atlantic Ocean. During his last years, he wrote his last work Currents of the Atlantic Ocean, published posthumously by his daughter Jane in 1832.
He was elected a fellow of the Royal Society in 1781; and he received the Copley Medal of the Royal Society in 1791, and the gold medal of the Royal Society of Literature in 1825. James Rennell has been called the Father of Indian Geography, and for his pioneering work on oceanography as the Father of Oceanography.
In later life Rennell suffered from gout and in 1829 he fell from a chair and broke his thigh. He died on 29 March 1830 at his home on Suffolk Street. He was interred in the nave of Westminster Abbey, and there is a tablet to his memory, with a bronze bust by Ludwig Hagbold, near the western door. The year of his death saw the foundation of the Royal Geographical Society. His collection of books were gifted by his heirs to the Royal Geographical Society.
In 1851, botanist Pieter Willem Korthals published Rennellia, a genus of flowering plants from Indo-China and western Malesia, belonging to the family Rubiaceae and it was named in James Rennell's honour.
Personal life
Rennell was "of middle height, well proportioned, with a grave yet sweet expression of countenance. He was diffident and unassuming, but ever ready to impart information. His conversation was interesting, and he had a remarkable flow of spirits. In all his discussions he was candid and ingenuous". Rennell was however irrational in proposing that the Niger ended in a lake without reaching the sea. He was also strongly opposed to the methods of William Lambton in his proposed trigonometrical survey. His opposition had to be neutralized by Sir Nevil Maskelyne before Lambton's plan was approved.
While at Dacca, Rennell became a close friend of John Cartier. It was in Cartier's home that he met Jane Thackeray (d. 1810), daughter of Dr Thomas Thackeray, headmaster of Harrow, and a great-aunt of the novelist William Makepeace Thackeray; he married her in 1772. Their older son Thomas (born 1779) died unmarried in 1846, while the second son William (born 1781) worked in the Bengal civil service and died in 1819 without leaving any children. One daughter named Jane (born 1773) died young, whilst another daughter, also Jane, born in 1777 on St Helena, where he had stopped on his way to England, married Admiral Sir John Tremayne Rodd, KCB in 1809. Lady Rodd devoted several years to publishing her father's current charts and revising new editions of his principal works. She died in December 1863.
References
Attribution
Further reading
Biographical notices of officers of the Royal (Bengal) engineers
External links
The journals of Major James Rennell ... 1764 to 1767. Print edition, 1910. Wikimedia commons, Internet Archive
1742 births
1830 deaths
Royal Navy officers
Fellows of the Royal Society
Recipients of the Copley Medal
English geographers
English historians
People from Teignbridge (district)
Royal Navy personnel of the Seven Years' War
Burials at Westminster Abbey
History of the Corps of Engineers (Indian Army)
Bengal Engineers officers
English oceanographers
18th-century English people
19th-century English people
English surveyors |
12791085 | https://en.wikipedia.org/wiki/ISO/IEC%2027004 | ISO/IEC 27004 | ISO/IEC 27004 Information Technology – Security techniques – Information Security Management – Measurement. It is part of the ISO/IEC family of information security management system (ISMS) standards, which embody a systematic approach to securing sensitive information. It provides standards for a robust approach to managing information security (infosec) and building resilience. It was published on December 7, 2009 and revised in December 2016. It is currently not certifiable and has not been translated into Spanish.
This standard appears in ISO/IEC 27000-series (more information can be found in ISO/IEC 27000). The ISO/IEC 27004 standard provides guidelines intended to assist organizations to evaluate the performance of information security and the efficiency of a management system in order to meet the requirements of the ISO/IEC 27001.
What does the standard establish?
This standard establishes:
Monitoring and measuring of information security performance.
Monitoring and measuring the effectiveness of an Information Security Management System (ISMS), including processes and controls.
Analysing and evaluating monitoring and measurement results.
This standard is applicable to all types of organizations regardless of size.
Terms and structure
The terms and definitions given in this standard are defined within the standard ISO/IEC 27000. The ISO/IEC 27004 standard is structured as follows:
Logic Base
Characteristics - this section defines, among other things, what to monitor and measure, who should monitor and measure, and when to monitor, measure and evaluate.
Types of measures - this section describes the two main types of measures: performance and effectiveness.
Processes - this section defines the types of processes to follow.
In addition to that, it has 3 annexes (A, B, C):
Annex A - describes an information security measurement model which includes the relationship of the components of the measurement model and the requirements of ISO/IEC 27001.
Annex B - provides a wide range of examples that are used as a guide.
Annex C - provides a more complete example.
References
External links
ISO Website
Information assurance standards
27004 |
458499 | https://en.wikipedia.org/wiki/Knowledge%20engineering | Knowledge engineering | Knowledge engineering (KE) refers to all technical, scientific and social aspects involved in building, maintaining and using knowledge-based systems.
Background
Expert systems
One of the first examples of an expert system was MYCIN, an application to perform medical diagnosis. In the MYCIN example, the domain experts were medical doctors and the knowledge represented was their expertise in diagnosis.
Expert systems were first developed in artificial intelligence laboratories as an attempt to understand complex human decision making. Based on positive results from these initial prototypes, the technology was adopted by the US business community (and later worldwide) in the 1980s. The Stanford heuristic programming projects led by Edward Feigenbaum was one of the leaders in defining and developing the first expert systems.
History
In the earliest days of expert systems there was little or no formal process for the creation of the software. Researchers just sat down with domain experts and started programming, often developing the required tools (e.g. inference engines) at the same time as the applications themselves. As expert systems moved from academic prototypes to deployed business systems it was realized that a methodology was required to bring predictability and control to the process of building the software. There were essentially two approaches that were attempted:
Use conventional software development methodologies
Develop special methodologies tuned to the requirements of building expert systems
Many of the early expert systems were developed by large consulting and system integration firms such as Andersen Consulting. These firms already had well tested conventional waterfall methodologies (e.g. Method/1 for Andersen) that they trained all their staff in and that were virtually always used to develop software for their clients. One trend in early expert systems development was to simply apply these waterfall methods to expert systems development.
Another issue with using conventional methods to develop expert systems was that due to the unprecedented nature of expert systems they were one of the first applications to adopt rapid application development methods that feature iteration and prototyping as well as or instead of detailed analysis and design. In the 1980s few conventional software methods supported this type of approach.
The final issue with using conventional methods to develop expert systems was the need for knowledge acquisition. Knowledge acquisition refers to the process of gathering expert knowledge and capturing it in the form of rules and ontologies. Knowledge acquisition has special requirements beyond the conventional specification process used to capture most business requirements.
These issues led to the second approach to knowledge engineering: development of custom methodologies specifically designed to build expert systems. One of the first and most popular of such methodologies custom designed for expert systems was the Knowledge Acquisition and Documentation Structuring (KADS) methodology developed in Europe. KADS had great success in Europe and was also used in the United States.
See also
Knowledge level modeling
Knowledge management
Knowledge representation and reasoning
Knowledge retrieval
Knowledge tagging
Method engineering
References
External links
Data & Knowledge Engineering – Elsevier Journal
Knowledge Engineering Review, Cambridge Journal
The International Journal of Software Engineering and Knowledge Engineering – World Scientific
IEEE Transactions on Knowledge and Data Engineering
Expert Systems: The Journal of Knowledge Engineering – Wiley-Blackwell
Semantic Web
Ontology (information science) |
9852982 | https://en.wikipedia.org/wiki/Syndie | Syndie | Syndie is an open-source cross-platform computer application to syndicate (re-publish) data (mainly forums) over a variety of anonymous and non-anonymous computer networks.
Syndie is capable of reaching archives situated in the following anonymous networks: I2P, Tor, Freenet.
History
Syndie has been in development since 2003 and ties in closely with the I2P network project, which is considered a parent project.
Following the departure of lead developer Jrandom in 2007, work on Syndie was paused; however, as of 2013 active development has once again resumed.
Concept
Syndie operates in a manner similar to blogs, newsgroups, forums, and other content tools; it allows one or more authors to privately or publicly post messages. Messages are pushed and pulled to and from archive servers (other peers that choose to act as archives), which are hosted in a variety of anonymous and non-anonymous locations.
Most archive servers are HTTP archives hosted inside the I2P network, but there are Syndie archives in Freenet as well as on the normal internet. Each archive does not control the content stored on it; by default all messages are pushed and pulled by all participants. In this way, every message is backed up by every user, so should one archive go down, the content can be pushed to a different archive and then pulled by other users of that archive. This means that even if all of the users and archives delete a message, as long as one person has it and there is one pushable archive, the message will be redistributed to every user.
Users have the option to delete locally stored messages after a set time, after the local storage consumes a certain amount of disk space, or by blacklists of users.
Each user can create multiple identities. Each identity is known as a forum, and users can post into their own or different forums. Each user can control their forum; for example, they may wish to run a blog by not permitting other people to start threads, but allowing them to post comments.
Technical requirements
Syndie is a Java application and as such can run on any platform on which Java is supported, although the Standard Widget Toolkit (SWT) is required for the graphical user interface versions.
Syndie is primarily a graphical application, based on the Standard Widget Toolkit for Java, but it can be run in a CLI (headless) mode.
See also
Distributed computing, Distributed Networking, Distributed database
I2P - The development of Syndie and I2P currently overlap.
Anonymous P2P
Osiris (Serverless Portal System) - Support P2P web forum.
Internet forum
References
External links
www.syndie.i2p inside the I2P network
Syndie web forum at I2P forums
Syndie at infoAnarchy.org (web site about infoanarchism)
Cross-platform free software
Free Internet forum software
Free routing software
Free software programmed in Java (programming language)
Cryptographic software
Anonymity networks
Internet privacy
Free network-related software
I2P |
5542334 | https://en.wikipedia.org/wiki/Stephen%20Crow | Stephen Crow | Stephen Crow (also known as Stephen J. Crow, Steve Crow, and Steve J. Crow) is a game programmer who worked in the 1980s on the ZX Spectrum platform, programming for companies such as Hewson Consultants and Bubble Bus Software. He also worked with members of the Graftgold team. More recently, he was the lead artist working for the now-defunct Monkeytropolis.
Recognition
Crow was elected "Best Programmer Of The Year" in 1986 by the readers of CRASH. He was also voted best programmer of the year at the 1985 Golden Joystick Awards.
Games
ZX Spectrum
Laser Snaker (1983), Poppy Soft
Factory Breakout (1984), Poppy Soft
Wizard's Lair (1985), Bubble Bus Software
Starquake (1985), Bubble Bus Software
Firelord (1986), Hewson
Uridium (1986), Hewson
Uridium+ (1987), Hewson
Zynaps (1987), Hewson
Eliminator (1988), Hewson
Heavy Metal (1990), US Gold
Commodore 64
Marauder (1988), Hewson
Turbo Outrun (1989), U.S. Gold
Savage (1989), Firebird Software
Mr. Heli (1989), Firebird Software
Golden Axe (1990), Virgin Games
Chase H.Q. II: Special Criminal Investigation (1990), Ocean Software
Other platforms
Global Gladiators (1993), Virgin Games
Disney's Aladdin (1993), Sega Enterprises
Cool Spot (1993), Virgin Games
Walt Disney's The Jungle Book (1994), Virgin Interactive Entertainment
Earthworm Jim (1994), Playmates Interactive
Earthworm Jim Special Edition (1995), Interplay Entertainment
Earthworm Jim 2 (1995), Playmates Interactive
Skullmonkeys (1998), Electronic Arts
BoomBots (1999), SouthPeak Games
Metal Arms: Glitch in the System (2003), Sierra Entertainment, Vivendi Universal Games
World of Warcraft: The Burning Crusade (2007), Blizzard Entertainment
References
Official WoS softgraphy
Living people
Video game programmers
Year of birth missing (living people) |
4971502 | https://en.wikipedia.org/wiki/Ethernet%20over%20SDH | Ethernet over SDH | Ethernet Over SDH (EoS or EoSDH) or Ethernet over SONET refers to a set of protocols which allow Ethernet traffic to be carried over synchronous digital hierarchy networks in an efficient and flexible way. The same functions are available using SONET.
Ethernet frames which are to be sent on the SDH link are sent through an "encapsulation" block (typically Generic Framing Procedure or GFP) to create a synchronous stream of data from the asynchronous Ethernet packets. The synchronous stream of encapsulated data is then passed through a mapping block which typically uses virtual concatenation (VCAT) to route the stream of bits over one or more SDH paths. As this is byte interleaved, it provides a better level of security compared to other mechanisms for Ethernet transport.
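As a rough illustration of the encapsulation step, here is a deliberately simplified sketch of a GFP-style core header. This is a toy under stated assumptions: real GFP (ITU-T G.7041) also scrambles the core header and adds payload type/extension headers and an optional payload FCS. The CRC-16 generator below is the one specified for the cHEC, but treat the overall frame layout as illustrative only.

```python
def crc16_itu(data: bytes) -> int:
    """CRC-16 with generator x^16 + x^12 + x^5 + 1, as used for the GFP cHEC."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def gfp_core_header(payload: bytes) -> bytes:
    """Build a simplified GFP-style core header: 2-byte length + 2-byte cHEC."""
    pli = len(payload).to_bytes(2, "big")     # payload length indicator
    chec = crc16_itu(pli).to_bytes(2, "big")  # CRC-16 computed over the PLI
    return pli + chec

# An encapsulated "frame" in this sketch is just core header + payload bytes.
frame = gfp_core_header(b"\x00" * 64) + b"\x00" * 64
assert len(frame) == 68
```

The design point this illustrates: the receiver can delineate frames by sliding over the synchronous byte stream until it finds a 4-byte window whose last two bytes are a valid CRC of the first two, so no flag bytes or byte stuffing are needed.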
After traversing SDH paths, the traffic is processed in the reverse fashion: virtual concatenation path processing to recreate the original synchronous byte stream, followed by decapsulation to converting the synchronous data stream to an asynchronous stream of Ethernet frames.
The SDH paths may be VC-4, VC-3, VC-12 or VC-11 paths. Up to 64 VC-11 or VC-12 paths can be concatenated together to form a single larger virtually concatenated group. Up to 256 VC-3 or VC-4 paths can be concatenated together to form a single larger virtually concatenated group. The paths within a group are referred to as "members". A virtually concatenated group is typically referred to by the notation <path type>-Xv, where <path type> is VC-4, VC-3, VC-12 or VC-11 and X is the number of members in the group.
A 10-Mbit/s Ethernet link is often transported over a VC-12-5v which allows the full bandwidth to be carried for all packet sizes.
A 100-Mbit/s Ethernet link is often transported over a VC-3-2v, which allows the full bandwidth to be carried when smaller packets are used (< 250 bytes) and Ethernet flow control restricts the rate of traffic for larger packets. However, this gives only about 97 Mbit/s, not the full 100 Mbit/s.
A 1000-Mbit/s (or 1 GigE) Ethernet link is often transported over a VC-3-21v or a VC-4-7v which allows the full bandwidth to be carried for all packets.
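The capacity arithmetic behind these mappings is simple multiplication of the per-member payload rate. The sketch below uses the nominal C-n container payload rates; treat the exact Mbit/s figures as nominal values rather than guaranteed throughput.

```python
# Nominal SDH container payload rates in Mbit/s (C-11, C-12, C-3, C-4).
PATH_RATE_MBIT = {
    "VC-11": 1.600,
    "VC-12": 2.176,
    "VC-3": 48.384,
    "VC-4": 149.760,
}

def vcat_capacity(path: str, members: int) -> float:
    """Capacity in Mbit/s of a virtually concatenated group <path>-<members>v."""
    return PATH_RATE_MBIT[path] * members

# The mappings discussed above:
assert vcat_capacity("VC-12", 5) > 10     # VC-12-5v carries a full 10 Mbit/s link
assert vcat_capacity("VC-3", 2) < 100     # VC-3-2v: ~96.8 Mbit/s, hence the ~97 Mbit/s cap
assert vcat_capacity("VC-3", 21) > 1000   # VC-3-21v: ~1016 Mbit/s, enough for 1 GigE
assert vcat_capacity("VC-4", 7) > 1000    # VC-4-7v: ~1048 Mbit/s
```

This is why a 100-Mbit/s link on VC-3-2v is slightly undersized while VC-3-21v and VC-4-7v both comfortably carry a full gigabit.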
EoS also drops the "idle" patterns between Ethernet frames before encapsulating the frames into GFP; the idle patterns are recreated at the other end during the decapsulation process. Hence this provides better throughput compared to native Ethernet transport.
An additional protocol, called link capacity adjustment scheme (LCAS), allows the two endpoints of the SDH paths to negotiate which paths are working and can carry traffic versus which paths should not be used to carry traffic.
See also
Packet over SONET
SDH
Synchronous optical networking
Network protocols |
33513826 | https://en.wikipedia.org/wiki/Digital%20performance | Digital performance | Digital Performance is a very wide category filled with a range of productions. It is performance that incorporates and integrates computer technologies and techniques. Performers can incorporate multimedia into any type of production, whether it is live on a theatre stage or in the street. Anything as small as video recordings or a visual image classifies the production as multimedia. When technologies play the key role in a performance, it is considered a digital performance. This can range from projections on a screen in front of a live audience, to creating and devising a performance in an online environment, to using animation and sensing software.
Introduction
The integration of technology can increase the effects, spectacles and impact of performances and visual arts. Incorporating multimedia in productions surprises the audience and keeps them engaged. The larger social impact after the performance includes interpretations from different cliques of people.
The Digital Performance Archive stores physical and catalogued archives from the 20th century. Its research project documented the increasing and creative use of computer technology and techniques within live theatre and dance productions, and in cyberspace interactive dramas and webcasts. Looking at a broad range of diverse productions would have enhanced their research project as they saw how each type of performance was affected and even how new types of performance came about due to the involvement of computer technologies and techniques and all other multimedia sources. In the 20th century, emerging forms of drama and genres of performance reflected the active and escalating role of computer technology in society, with businesses and education treating computers as essential. As our society became more reliant on computers in everyday life, artists began relying on computer technology to play a significant role in film as well as live theatre.
CD-ROMs, video games and installations raised the level of interactive potential of computers. Their accessibility and sophistication added those very aspects to performances that used advanced technology.
From the first digital performance to present day, artists have followed each other's works and explored trends. Technology has its own trends and upgrades, which appeal to artists in the field. Online environments are a strong base for theoretical trends as the frameworks are similar and sometimes the same.
The largest platform for theatre in the world is the World Wide Web. The Internet is seen as a satisfying place for relaxation and offers everyone who uses it their 'fifteen megabytes of fame'. Every single person who uses online networking sites, blogs, chat rooms, MOOs and IRC is creating their own performance with the use of e-friendships. Therefore it is not just artists who deliberately devise theatrical events via the use of computer techniques and technologies. However, the focus of this article is on artistic performance.
The field of Digital Performance generates a wide variety of visual and stylistic aspects. Computers allow the artist/s or companies to create extraordinary and unique experiences, such as virtual realities.
Many consider the relationship between technology and art stimulating, thought-provoking and complementary. 'Performance is undertaking a shift in the conception of technology'. As the educational curriculum also upgrades with the latest technologies, students who are 'digital natives' make use of technology and find it effective in connecting with a broad cross-section of audiences.
Performers and devisers of digital performance productions approach technology in diverse ways in order to reach different meanings, content, drama, visual impact and audience-performer relationships. In digital performance the computer is often seen as a middleman that fixes long-standing problems, rather than as a source of original, new performance processes.
History
The term digital performance is generally defined to cover all types of performance in which computer technologies take a central role, rather than an auxiliary one, in content, techniques, aesthetics or delivery forms. Digital performance typically explores representations of the subliminal, dreams and fantasy worlds.
Over the past few decades there has been vast development in every aspect of the technological world, and with it a large increase in experimentation integrating computer technologies into the performing arts; new digital technologies, and developments of existing ones, are creating a more significant impact on the way different art forms are practised. Digital media now plays a new, far more dramatic role in live theatre, dance and performance, and as digital media performances proliferate, many new forms of interactive performance genre have emerged. These new genres most commonly take the form of audience-participatory installation pieces that can take place on the internet or be played from a CD-ROM. “Computers are arenas for social experience and dramatic interaction, a type of media more like public theatre, and their output is used for qualitative interaction, dialogue and conversation.” The computer is fast becoming a significant tool and agent of performative action and creation, and computer technologies can be contextualised as part of a social, cultural and artistic change: they now permit new artistic modes of expression and new generic forms of networked, interactive performance. Theatre itself has always been at the cutting edge of technology and has been quick to recognise, and take full advantage of, the dramatic and aesthetic potential that new and existing technologies offer.
Theatre, dance and performance art have always been multimedia forms; at the very core of theatre, through all its manifestations up to contemporary experimentation and alongside all the visual elements of a production, stand the human voice and the spoken text.
The roots of digital performance practice can be traced back many decades or even centuries. Three main periods stand out in the history of multimedia performance: Futurism during the 1910s, mixed-media performance during the 1960s and experimentation with performance and the computer during the 1990s. Both the Futurist era and the era of computer-incorporated performance were greatly inspired by the development of new and existing technology. Digital performance practice has drawn on many of the avant-garde movements dating from the early twentieth century, including Bauhaus, Dada and Surrealism, and can be linked to the aesthetics, philosophies and practices of the Futurist movement. One of the main links connecting Futurism to digital performance is the use of the ‘machine’, as in the set of Robert Lepage's Zulu Time (1999). Futurism arguably provides more of a philosophical basis for contemporary digital performance than movements such as Bauhaus, Dada and Surrealism, which mainly provide inspiration for much of the content and styles of artistic expression. Many works of the early twentieth-century avant-garde used pre-digital technologies.
One of the earliest aims of integrating theatre and film in a performance with digital technology was to challenge the distinction between ‘liveness’, the live performer on the stage, and media imagery; the relationship between the virtual and the actual performance becomes a dialogical interactivity. With the increased use of digital media in performance, not all performances are necessarily live; without the body's physical presence it can be argued whether this is still live theatre or just a set of media images and footage. Theatre pieces that build digital media and computer-generated projections into performance have a long historical lineage stretching back over a hundred years to an experiment conducted by Loie Fuller, the first modern dance choreographer to use new technology in her performance work. In 1911 Fuller, a dancer, projected film footage onto diaphanous robes: as she danced, the robes she was wearing became a kind of ‘screen’ onto which multi-coloured lights were projected. This was one of the first pieces of theatre in which film footage became part of a live performance.
From the early 1960s, computer-generated imagery began to emerge as a distinctive art form, and John Whitney's film Catalog (1961) offered viewers one of film's first uses of computer transformations. Although digital arts had been developing since the 1960s, in the 1990s computer technologies became much more accessible to artists, leading to a large increase in digital performance activity. During this time computer hardware became much more “user-friendly”, and the period saw the invention of the digital camera, the home PC (personal computer) and the establishment of the World Wide Web. This period would come to be known as the ‘Digital Revolution’, and it greatly influenced the aesthetics, creation and culture of the performing arts, with significant effects on film and television production, creative writing and the visual and performance arts.
From 1970 there was a period when theatrical experimentation elevated the visual over the verbal; media projections proliferated in theatre, dance and performance art, using both screens and video monitors. With the ease of using these video technologies, more and more artists began to explore the possibilities of integrating visual media into their live performance work. Media technology such as film, video and sound equipment became a main characteristic of experimental theatre, and some of the most noted performance artists of the time created work incorporating video and film footage into their theatre productions. By the 1990s multimedia and computer technologies had widely become a part of everyday life. In live multimedia theatre, projection screens or video monitors frame additional, two-dimensional space; media screens provide a uniquely flexible space, unlike the fixed point of view that a traditional theatre offers the audience. Towards the end of the twentieth century, digital computer technologies became increasingly ubiquitous.
As digital media has become more popular over the years, digital images and video are increasingly perceived as lacking in legitimacy, as these technologies have gradually intensified. Digital performance is an extension of a constantly progressing history of embracing and adapting new and existing technologies to amplify a performance's visual and aesthetic effect, to create a sense of spectacle, and to capture emotional and sensorial impact.
Production examples
Video conferencing is also a part of digital performance: some theatre companies (such as The Gertrude Stein Repertory Theatre and Kunstwerk-Blend) introduced it into their productions to bring performers from different locations together live on stage, creating a lively new brand of digital performance. The Internet is used as a base for these productions, with text-based online environments such as MUDs and MOOs as well as webcams and webcasts, all creating new forms of live and interactive performance. With the ongoing increase in internet users, and with this software accessible to more and more people, more artists are experimenting with and devising performances using computer technologies, which in turn raises the number of digital performances and productions.
Another form of digital performance is dance production using software and computer techniques such as advanced animation and motion capture. Productions can project images of virtual dancers onto the stage. Custom-made motion-sensing software is closely involved with digital performance; it can be used to control images, avatars, sounds and lighting live on stage.
Communication through the Internet has been classed as a type of digital performance, theorized as a kind of virtual performance of the self. On this view digital performance is everywhere, and the term has itself been modernized: it has been noted as incorporating the electronic elements of everyday life through its communicational and production elements.
Some strands of performance, such as theatre, dance and performance art, have always been multimedia forms. Dance, for a start, involves an intimate relationship with music. All three involve visual aspects such as sets, props, lighting and costume, all part of a production that enhances the body or bodies in a space. Such aspects make a performance multimedia, and as computer techniques grow more advanced, digital performance is becoming a strong and popular area of study.
With the effects of the ongoing COVID-19 pandemic, artists have been experimenting with online and digital theatre, for example The Coronalogues (2021), featuring works from directors across the world including Ken Kwek and Dick Lee.
References
Internet art
Digital art
Theatre |
30149926 | https://en.wikipedia.org/wiki/JBoss%20operations%20network | JBoss operations network | JBoss Operations Network (or JBoss ON or JON) is free software/open-source Java EE-based network management software. JBoss Operations Network is part of the JBoss Enterprise Middleware portfolio of software. JBoss ON is an administration and management platform for the development, testing, deployment, and monitoring of the application lifecycle. Because it is Java-based, the JBoss application server operates cross-platform: usable on any operating system that supports Java. JBoss ON was developed by JBoss, now a division of Red Hat.
Product features
JBoss ON provides performance, configuration, and inventory management in order to deploy, manage, and monitor the JBoss middleware portfolio, applications, and services.
JBoss ON provides management of the following:
Discovery and inventory
Configuration management
Application deployment
Perform and schedule actions on servers, applications and services
Availability management
Performance management
Provisioning
JBoss ON is the downstream of RHQ (see also section Associated Acronyms).
Licensing & Pricing
The various JBoss application platforms are open source, but Red Hat charges to provide a support subscription for JBoss Enterprise Middleware.
Associated acronyms
Acronyms associated with JBoss ON:
RHQ - upstream open source project of JBoss ON. The current stable version is RHQ 4.13; the main difference between RHQ 4 and RHQ 3 is the transition of the UI framework to Google Web Toolkit.
Jopr - previously the JBossAS management bits (upstream) of JBoss ON - now integrated into the RHQ source base (since September 2009). Jopr used to use RHQ as its upstream. There will be no more separate Jopr releases.
JON - JBoss Operations Network (ON)
See also
List of JBoss software
Network monitoring system
Comparison of network monitoring systems
HyPerformix IPS Performance Optimizer
IBM Tivoli Framework
References
Bibliography
External links
JBoss application server website
Securing JBoss
JBoss Wiki
JBoss Community Projects
JBoss Introduction by Javid Jamae
Java enterprise platform
Red Hat software
Cross-platform software
Network management |
69965051 | https://en.wikipedia.org/wiki/Tobi%20Bruce | Tobi Bruce | Tobi Bruce (born 1965) has been the Director of Exhibitions and Collections and Senior Curator at the Art Gallery of Hamilton since 2015. She is a Canadian art historian who places curatorial collaboration at the centre of her practice.
Career
Bruce was born and grew up in Montreal. She received her Honours BA in Art History from Queen's University, Kingston (1989) and her MA in Canadian Art History from Carleton University, Ottawa (1999). From 1991 to 1992, she served as Acting Registrar of the Agnes Etherington Art Centre, Queen's University, then went to the Art Gallery of Hamilton (AGH), as Registrar in 1993. She was appointed Curator of Historical Art in 1999 and in 2015, Director of Exhibitions and Collections.
She has focused on Canadian historical art in exhibitions ranging from biography, with her show on the Canadian artist Harriet Ford (2001), to her co-authored exhibition and publication on the history of the Art Gallery of Hamilton's historical Canadian collection, Lasting Impressions: celebrated works from the Art Gallery of Hamilton (2005), to her co-curated exhibition William Kurelek: The Messenger (2011), the first retrospective of his art in a quarter century and the largest ever mounted. In the same year, she also co-curated The French Connection: Canadians at the Paris Salons, 1880–1900.
In 2014, after three years of research and two trips to Sweden and France, she curated her major retrospective and co-authored the book Into the Light: The Paintings of William Blair Bruce (1859–1906), examining the artist from different viewpoints (indigenous included) to achieve diversity. In 2015, she co-curated The Artist Herself: Self-Portraits by historical Canadian women artists (2015) with Alicia Boutilier of the Agnes Etherington Art Centre. This exhibition expanded the genre's definition by using not only the human face but other art forms to explore self-representation. Her 2021 exhibition Tom Thomson: The Art of Authentication, again co-curated and co-authored with Boutilier, established criteria to authenticate a work of art, taking as its focus the work of Tom Thomson and exhibiting possible Thomsons and known fakes to illustrate the help authentication can provide. The exhibition was called a "rewarding experience" as an examination of authentication and forgery in art, using Tom Thomson as a case in point. The show was rated one of the best exhibitions of the season by the Art Institute of Canada because it gave insight into the extensive problem-solving that museum professionals undertake in tracing authenticity.
Bruce has contributed a chapter to Canada and Impressionism: New Horizons at the National Gallery of Canada (2019) and also has written entries to such exhibitions as Embracing Canada: Landscapes from Krieghoff to the Group of Seven, Vancouver Art Gallery (2015) and Uninvited, Canadian Women Artists in the Modern Movement, McMichael Canadian Art Collection (2020). Her lectures on Canadian art include one on "Canadian Art-Making and Making Art Exhibitions" to a panel of Archivists, Librarians and Curators at the Art Libraries Society of North America 40th Annual Conference, Toronto (2012). She has been an Adjunct Lecturer in Canadian Art History at McMaster University, Hamilton, since 2016 and has served on the Board of Trustees, Association of Art Museum Curators (2017–2020; 2020–2022).
References
1965 births
Living people
Canadian non-fiction writers
Canadian art historians
Canadian women non-fiction writers
Queen's University alumni
Women art historians
Canadian women historians
Canadian art curators
People from Montreal |
2370794 | https://en.wikipedia.org/wiki/LAN%20gaming%20center | LAN gaming center | A LAN Gaming Center is a business where one
can use a computer connected over a LAN to
other computers, primarily for the purpose of playing multiplayer
computer games. Use of these computers or game consoles costs a fee, usually per hour or minute; sometimes one can have unmetered access with a pass for a day or month, etc. It may or may not serve as a
regular café as well, with food and drinks being served. Many game
centers have evolved in recent years to also include console gaming
(Xbox, GameCube, PlayStation 2). Other
centers offer computer repair and consulting, custom
built computers (White box computer), web design, programming
classes or summer camps, and many other technology related
services. Centers are starting to offer PS3s, Wiis and
Xbox 360s that are playable in store.
LAN gaming centers can come in various sizes and styles, from the very small (6-8 computers) to the very large (400+ computers). Most have computer systems with higher-end hardware built specifically for computer gaming. Customers can play games with (or against) in-house opponents, and most also include a high-speed Internet connection to allow customers to play games with online opponents as well (usually at the same time). Most also host a number of special events such as tournaments and LAN parties, some lasting throughout the night. Another typical feature is the ability to browse the Web and use instant messaging clients. Often these gaming centers allow customers the option of renting out the whole or part of the store for private LAN parties. LAN centers are typically decorated in such a way as to enhance the already present gaming atmosphere, such as adding black-light lightbulbs and gaming paraphernalia and posters around the center. A standard LAN gaming center will have rows of computers next to each other with highback leather computer chairs.
There are over 650 LAN centers in the US, while 90% of the LAN centers in the world are in China, the largest having over 1777 seats.
It is common for LAN gaming centers to sell the games that they have already installed on their in-house computers, most notably MMORPGs and many FPS games.
Campus gaming centers
The first LAN gaming center located on a college campus was Savage Geckos, which was opened by Bruce McCulloch Jones as a tenant of Eastern Michigan University's Student Center, both opening on November 6, 2006. The combination retail/gaming center included 21 networked Xboxes, other consoles (PS2s, PS3s, Wiis), 10 networked gaming PCs and theatre seating (with cup holders) for game play, LCD screens, video projectors and a retail/arcade/hang-out area. This center hosted some of the first on-campus intercollegiate play with a Halo 3 tournament between students from Eastern Michigan University, University of Michigan, Michigan State University and Oakland University.
The operation lasted until spring of 2008, when it was purchased by the university. Mr. Jones made a series of presentations to the Association of College Unions International promoting the use of video games for positive social interaction in campus student centers. Now there are over 20 universities with some form of LAN center on campus, including Eastern Michigan University, University of Michigan, Oakland University, Illinois State University, and Illinois Institute of Technology.
See also
PC bang
Internet café
Public computer
References
External links
Video game culture
LAN parties |
57165746 | https://en.wikipedia.org/wiki/Azure%20Sphere | Azure Sphere | Azure Sphere are services and products from Microsoft that allows vendors of Internet of Things devices to increase security by combining a specific system on a chip, Azure Sphere OS and an Azure-based cloud environment for continuous monitoring.
Azure Sphere OS
The Azure Sphere OS is a custom Linux-based microcontroller operating system created by Microsoft to run on an Azure Sphere-certified chip and to connect to the Azure Sphere Security Service. The Azure Sphere OS provides a platform for Internet of Things application development, including both high-level applications and real-time capable applications. It is the first operating system running a Linux kernel that Microsoft has publicly released and the second Unix-like operating system that the company has developed for external (public) users, the other being Xenix.
Azure Sphere Security Service
The Azure Sphere Security Service, sometimes referred to as AS3, is a cloud-based service that enables maintenance, updates, and control for Azure Sphere-certified chips. The Azure Sphere Security Service establishes a secure connection between a device and the internet and/or cloud services and ensures secure boot. The primary purpose of contact between an Azure Sphere device and the Azure Sphere Security Service is to authenticate the device identity, ensure the integrity and trust of the system software, and to certify that the device is running a trusted code base. The service also provides the secure channel used by Microsoft to automatically download and install Azure Sphere OS updates and customer application updates to deployed devices.
Azure Sphere chips and hardware
Azure Sphere-certified chips and hardware support two general implementation categories: greenfield and brownfield. Greenfield implementation involves designing and building new IoT devices with an Azure Sphere-certified chip. Azure Sphere-certified chips are currently produced by MediaTek. In June 2019, NXP announced plans to produce a line of Azure Sphere-certified chips. In October 2019, Qualcomm announced plans to produce the first Azure Sphere-certified chips with cellular capabilities. Brownfield implementation involves the use of an Azure Sphere guardian device to securely connect an existing device to the internet. Azure Sphere guardian modules are currently produced by Avnet.
MediaTek 3620
MT3620 is the first Azure Sphere-certified chip and includes an ARM Cortex-A7 processor (500 MHz), two ARM Cortex-M4F I/O subsystems (200 MHz), 5x UART/I2C/SPI, 2x I2S, 8x ADC, up to 12 PWM counters and up to 72x GPIO, and Wi-Fi capability. MT3620 contains the Microsoft Pluton security subsystem with a dedicated ARM Cortex-M4F core that handles secure boot and secure system operation.
Azure Sphere Hardware
Azure Sphere-certified chips can be purchased in several different hardware configurations produced by Microsoft partners.
Modules
Avnet Wi-Fi Module
AI-Link Wi-Fi Module
USI Dual Band Wi-Fi Module
Development kits
Avnet MT3620 Starter Kit
Seeed MT3620 Dev Board
Seeed MT3620 Mini Dev Board
Guardian devices
Avnet Guardian Module
Azure Sphere Guardian module
An Azure Sphere Guardian module is external, add-on hardware that incorporates an Azure Sphere-certified chip and can be used to securely connect an existing device to the internet. In addition to an Azure-Sphere certified chip, an Azure Sphere Guardian module includes the Azure Sphere OS and the Azure Sphere Security Service. A guardian module is a method of implementing secure connectivity for existing devices without exposing those devices to the internet. The guardian module can be connected to a device through an existing peripheral on the device and is then connected to the internet through Wi-Fi or Ethernet. The device itself is not connected directly to the network.
Microsoft Pluton
Pluton is a Microsoft-designed security subsystem that implements a hardware-based root of trust for Azure Sphere. It includes a security processor core, cryptographic engines, a hardware random number generator, public/private key generation, asymmetric and symmetric encryption, support for elliptic curve digital signature algorithm (ECDSA) verification for secured boot, and measured boot in silicon to support remote attestation with a cloud service, and various tampering counter-measures.
Application development
The Linux-based Azure Sphere OS provides a platform for developers to write applications that use peripherals on the Azure Sphere chip. Applications can run on either the A7 core with access to external communications or as real-time capable apps on one of the M4 processors. Real-time capable applications can run on either bare metal or with a real-time operating system (RTOS). Developer applications can be distributed to Azure Sphere devices through the same secure mechanism as the Azure Sphere OS updates.
Timeline
The following is a list of announcements and releases from Microsoft around Azure Sphere.
See also
Windows Subsystem for Linux
Xenix
Windows IoT
References
External links
2018 software
ARM operating systems
Computer-related introductions in 2018
Computing platforms
Embedded operating systems
Linux
Microcontroller software
Microsoft hardware
Microsoft operating systems |
432897 | https://en.wikipedia.org/wiki/Mechanical%20calculator | Mechanical calculator | A mechanical calculator, or calculating machine, is a mechanical device used to perform the basic operations of arithmetic automatically, or (historically) a simulation such as an analog computer or a slide rule. Most mechanical calculators were comparable in size to small desktop computers and have been rendered obsolete by the advent of the electronic calculator and the digital computer.
Surviving notes from Wilhelm Schickard in 1623 reveal that he designed and had built the earliest of the modern attempts at mechanizing calculation. His machine was composed of two sets of technologies: first an abacus made of Napier's bones, to simplify multiplications and divisions first described six years earlier in 1617, and for the mechanical part, it had a dialed pedometer to perform additions and subtractions. A study of the surviving notes shows a machine that would have jammed after a few entries on the same dial, and that it could be damaged if a carry had to be propagated over a few digits (like adding 1 to 999). Schickard abandoned his project in 1624 and never mentioned it again until his death 11 years later in 1635.
Two decades after Schickard's supposedly failed attempt, in 1642, Blaise Pascal decisively solved these particular problems with his invention of the mechanical calculator. Co-opted into his father's labour as tax collector in Rouen, Pascal designed the calculator to help in the large amount of tedious arithmetic required; it was called Pascal's Calculator or Pascaline.
Thomas' arithmometer, the first commercially successful machine, was manufactured two hundred years later in 1851; it was the first mechanical calculator strong enough and reliable enough to be used daily in an office environment. For forty years the arithmometer was the only type of mechanical calculator available for sale.
The comptometer, introduced in 1887, was the first machine to use a keyboard consisting of columns of nine keys (from 1 to 9) for each digit. The Dalton adding machine, manufactured in 1902, was the first to have a 10-key keyboard. Electric motors were used on some mechanical calculators from 1901. In 1961, a comptometer-type machine, the Anita mk7 from Sumlock Comptometer Ltd., became the first desktop mechanical calculator to receive an all-electronic calculator engine, creating the link between these two industries and marking the beginning of the mechanical calculator's decline. The production of mechanical calculators came to a stop in the middle of the 1970s, closing an industry that had lasted for 120 years.
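The contrast between the two keyboard layouts described above can be sketched in a few lines of Python (an illustrative model only, not a description of the actual mechanisms): on a comptometer-style full keyboard, each keypress in a column adds into the running total immediately, while on a 10-key machine the digits are first composed into a number and then added.

```python
def comptometer_press(total, column, key):
    """Full-keyboard entry: pressing key `key` (1-9) in decimal
    position `column` adds key * 10**column to the total at once."""
    assert 1 <= key <= 9
    return total + key * 10 ** column

def ten_key_enter(digits):
    """10-key entry: digits are keyed most-significant first and
    composed into a single number before any addition happens."""
    value = 0
    for d in digits:
        value = value * 10 + d
    return value

# Entering 407 on a full keyboard: press 4 in the hundreds column and
# 7 in the units column (zeros need no keypress at all).
total = comptometer_press(0, 2, 4)
total = comptometer_press(total, 0, 7)
print(total)                      # -> 407
print(ten_key_enter([4, 0, 7]))   # -> 407
```

Note how the full keyboard needs no key for a zero digit, one reason key-driven machines were fast for addition.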
Charles Babbage designed two new kinds of mechanical calculators, which were so big that they required the power of a steam engine to operate, and that were too sophisticated to be built in his lifetime. The first one was an automatic mechanical calculator, his difference engine, which could automatically compute and print mathematical tables. In 1855, Georg Scheutz became the first of a handful of designers to succeed at building a smaller and simpler model of his difference engine. The second one was a programmable mechanical calculator, his analytical engine, which Babbage started to design in 1834; "in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched card system derived from the Jacquard loom" making it infinitely programmable. In 1937, Howard Aiken convinced IBM to design and build the ASCC/Mark I, the first machine of its kind, based on the architecture of the analytical engine; when the machine was finished some hailed it as "Babbage's dream come true".
Ancient history
A short list of other precursors to the mechanical calculator must include a group of mechanical analog computers which, once set, are only modified by the continuous and repeated action of their actuators (crank handle, weight, wheel, water...). Before the common era, there are odometers and the Antikythera mechanism, a seemingly out of place, unique, geared astronomical clock, followed more than a millennium later by early mechanical clocks, geared astrolabes and followed in the 15th century by pedometers. These machines were all made of toothed gears linked by some sort of carry mechanisms. These machines always produce identical results for identical initial settings unlike a mechanical calculator where all the wheels are independent but are also linked together by the rules of arithmetic.
The 17th century
Overview
The 17th century marked the beginning of the history of mechanical calculators, as it saw the invention of its first machines, including Pascal's calculator, in 1642. Blaise Pascal had invented a machine which he presented as being able to perform computations that were previously thought to be only humanly possible.
The 17th century also saw the invention of some very powerful tools to aid arithmetic calculations like Napier's bones, logarithmic tables and the slide rule which, for their ease of use by scientists in multiplying and dividing, ruled over and impeded the use and development of mechanical calculators until the production release of the arithmometer in the mid 19th century.
Invention of the mechanical calculator
Blaise Pascal invented a mechanical calculator with a sophisticated carry mechanism in 1642. After three years of effort and 50 prototypes he introduced his calculator to the public, and he built twenty of these machines in the following ten years. This machine could add and subtract two numbers directly and multiply and divide by repetition. Since, unlike Schickard's machine, the Pascaline dials could only rotate in one direction, zeroing it after each calculation required the operator to dial in all 9s and then (method of re-zeroing) propagate a carry right through the machine. This suggests that the carry mechanism would have proved itself in practice many times over, a testament to the quality of the Pascaline: none of the 17th- and 18th-century criticisms of the machine mentioned a problem with the carry mechanism, and yet it was fully tested on all the machines, by their resets, all the time.
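The reset procedure described above can be illustrated with a toy model of an add-only accumulator (a hypothetical sketch in Python, not a description of the Pascaline's actual gearing): each dial holds a digit 0-9 and overflow passes a carry to the next dial, so dialling all 9s and then adding 1 ripples a carry through every position and leaves the whole machine at zero.

```python
def add_to_dial(dials, pos, amount):
    """Add `amount` to the dial at index `pos`; `dials` lists the
    digits least-significant first. Overflow ripples a carry into
    each successive dial; a carry off the last dial is dropped."""
    dials[pos] += amount
    while pos < len(dials) and dials[pos] > 9:
        dials[pos] -= 10
        pos += 1
        if pos < len(dials):
            dials[pos] += 1   # the carry advances the next dial by one
    return dials

# Dial in all 9s, then add 1: the carry propagates through every
# dial, exercising the entire carry mechanism and zeroing the machine.
print(add_to_dial([9, 9, 9, 9, 9, 9], 0, 1))   # -> [0, 0, 0, 0, 0, 0]
```

Because every reset drives a carry across all the dials, each Pascaline effectively stress-tested its own carry mechanism throughout its working life.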
In 1672, Gottfried Leibniz started working on adding direct multiplication to what he understood was the working of Pascal's calculator. However, it is doubtful that he had ever fully seen the mechanism and the method could not have worked because of the lack of reversible rotation in the mechanism. Accordingly, he eventually designed an entirely new machine called the Stepped Reckoner; it used his Leibniz wheels, was the first two-motion calculator, the first to use cursors (creating a memory of the first operand) and the first to have a movable carriage. Leibniz built two Stepped Reckoners, one in 1694 and one in 1706. Only the machine built in 1694 is known to exist; it was rediscovered at the end of the 19th century having been forgotten in an attic in the University of Göttingen.
Leibniz had invented his namesake wheel and the principle of a two-motion calculator, but after forty years of development he wasn't able to produce a machine that was fully operational; this makes Pascal's calculator the only working mechanical calculator in the 17th century. Leibniz was also the first person to describe a pinwheel calculator. He once said "It is unworthy of excellent men to lose hours like slaves in the labour of calculation which could safely be relegated to anyone else if machines were used."
Other calculating machines
Schickard, Pascal and Leibniz were inevitably inspired by the role of clockwork which was highly celebrated in the seventeenth century. However, simple-minded application of interlinked gears was insufficient for any of their purposes. Schickard introduced the use of a single toothed "mutilated gear" to enable the carry to take place. Pascal improved on that with his famous weighted sautoir. Leibniz went even further in relation to the ability to use a moveable carriage to perform multiplication more efficiently, albeit at the expense of a fully working carry mechanism.
The principle of the clock (input wheels and display wheels added to a clock-like mechanism) for a direct-entry calculating machine could not be implemented effectively with the technological capabilities of the 17th century, because the gears would jam when a carry had to be moved several places along the accumulator. The only 17th-century calculating clocks that have survived to this day do not have a machine-wide carry mechanism and therefore cannot be called fully effective mechanical calculators. A much more successful calculating clock was built by the Italian Giovanni Poleni in the 18th century; it was a two-motion calculating clock (the numbers are inscribed first and then processed).
In 1623, Wilhelm Schickard, a German professor of Hebrew and astronomy, designed a calculating clock which he sketched in two letters that he wrote to Johannes Kepler. The first machine to be built by a professional was destroyed during its construction and Schickard abandoned the project in 1624. The drawings had appeared in various publications over the centuries, starting in 1718 with a book of Kepler's letters by Michael Hansch, but in 1957 they were presented for the first time, by Dr. Franz Hammer, as depicting a long-lost mechanical calculator. The building of the first replica in the 1960s showed that Schickard's machine had an unfinished design, and wheels and springs therefore had to be added to make it work. The use of these replicas showed that the single-tooth wheel, when used within a calculating clock, was an inadequate carry mechanism (see Pascal versus Schickard). This did not mean that such a machine could not be used in practice, but the operator, when faced with the mechanism resisting rotation in the unusual circumstance of a carry being required beyond (say) three dials, would need to "help" the subsequent carry to propagate.
Around 1643, a French clockmaker from Rouen, after hearing of Pascal's work, built what he claimed to be a calculating clock of his own design. Pascal fired all his employees and stopped developing his calculator as soon as he heard of the news. It is only after being assured that his invention would be protected by a royal privilege that he restarted his activity. A careful examination of this calculating clock showed that it didn't work properly and Pascal called it an avorton (aborted fetus).
In 1659, the Italian Tito Livio Burattini built a machine with nine independent wheels, each one of these wheels was paired with a smaller carry wheel. At the end of an operation the user had to either manually add each carry to the next digit or mentally add these numbers to create the final result.
In 1666, Samuel Morland invented a machine designed to add sums of money, but it was not a true adding machine since the carry was added to a small carry wheel situated above each digit and not directly to the next digit. It was very similar to Burattini's machine. Morland also created a multiplying machine with interchangeable disks based on Napier's bones. Taken together these two machines provided a capacity similar to that of Schickard's invention, although it is doubtful that Morland ever encountered Schickard's calculating clock.
In 1673, the French clockmaker René Grillet described in Curiositez mathématiques de l'invention du Sr Grillet, horlogeur à Paris a calculating machine that would be more compact than Pascal's calculator and reversible for subtraction. The only two Grillet machines known have no carry mechanism; they display three lines of nine independent dials and also have nine rotating Napier's rods for multiplication and division. Contrary to Grillet's claim, it was not a mechanical calculator after all.
The 18th century
Overview
The 18th century saw the first mechanical calculator that could perform a multiplication automatically; designed and built by Giovanni Poleni in 1709 and made of wood, it was the first successful calculating clock. For all the machines built in this century, division still required the operator to decide when to stop a repeated subtraction at each index, and therefore these machines were only providing a help in dividing, like an abacus. Both pinwheel calculators and Leibniz wheel calculators were built with a few unsuccessful attempts at their commercialization.
Prototypes and limited runs
In 1709, the Italian Giovanni Poleni was the first to build a calculator that could multiply automatically. It used a pinwheel design, was the first operational calculating clock and was made of wood; he destroyed it after hearing that Antonius Braun had received 10,000 Guldens for dedicating a pinwheel machine of his own design to the emperor Charles VI of Vienna.
In 1725, the French Academy of Sciences certified a calculating machine derived from Pascal's calculator designed by Lépine, a French craftsman. The machine was a bridge between Pascal's calculator and a calculating clock. The carry transmissions were performed simultaneously, like in a calculating clock, and therefore "the machine must have jammed beyond a few simultaneous carry transmissions".
In 1727, a German, Antonius Braun, presented the first fully functional four-operation machine to Charles VI, Holy Roman Emperor in Vienna. It was cylindrical in shape and was made of steel, silver and brass; it was finely decorated and looked like a renaissance table clock. His dedication to the emperor engraved on the top of the machine also reads "..to make easy to ignorant people, addition, subtraction, multiplication and even division".
In 1730, the French Academy of Sciences certified three machines designed by Hillerin de Boistissandeau. The first one used a single-tooth carry mechanism which, according to Boistissandeau, wouldn't work properly if a carry had to be moved more than two places; the two other machines used springs that were progressively armed until they released their energy when a carry had to be moved forward. The design was similar to Pascal's calculator, but instead of using the energy of gravity Boistissandeau used the energy stored in the springs.
In 1770, Philipp Matthäus Hahn, a German pastor, built two circular calculating machines based on Leibniz' cylinders. J. C. Schuster, Hahn's brother-in-law, built a few machines of Hahn's design into the early 19th century.
In 1775, Lord Stanhope of the United Kingdom designed a pinwheel machine. It was set in a rectangular box with a handle on the side. He also designed a machine using Leibniz wheels in 1777. "In 1777 Stanhope produced the Logic Demonstrator, a machine designed to solve problems in formal logic. This device marked the beginning of a new approach to the solution of logical problems by mechanical methods."
In 1784, Johann-Helfrich Müller built a machine very similar to Hahn's machine.
The 19th century
Overview
Luigi Torchi invented the first direct multiplication machine in 1834. This was also the second key-driven machine in the world, following that of James White (1822).
The mechanical calculator industry started in 1851 when Thomas de Colmar released his simplified Arithmomètre, the first machine that could be used daily in an office environment.
For 40 years, the arithmometer was the only mechanical calculator available for sale and was sold all over the world. By 1890, about 2,500 arithmometers had been sold, plus a few hundred more from two licensed arithmometer clone makers (Burkhardt, Germany, 1878 and Layton, UK, 1883). Felt and Tarrant, the only other competitor in true commercial production, had sold 100 comptometers in three years.
The 19th century also saw the designs of Charles Babbage's calculating machines: first his difference engine, started in 1822, which was the first automatic calculator since it continuously used the results of the previous operation for the next one; and second his analytical engine, started in 1834, which was the first programmable calculator, using Jacquard's cards to read program and data, and which provided the blueprint of the mainframe computers built in the middle of the 20th century.
Desktop calculators produced
In 1851, Thomas de Colmar simplified his arithmometer by removing the one-digit multiplier/divider. This made it a simple adding machine, but thanks to its moving carriage used as an indexed accumulator, it still allowed for easy multiplication and division under operator control. The arithmometer was now adapted to the manufacturing capabilities of the time; Thomas could therefore manufacture consistently a sturdy and reliable machine. Manuals were printed and each machine was given a serial number. Its commercialization launched the mechanical calculator industry. Banks, insurance companies, government offices started to use the arithmometer in their day-to-day operations, slowly bringing mechanical desktop calculators into the office.
In 1878 Burkhardt, of Germany, was the first to manufacture a clone of Thomas' arithmometer. Until then Thomas de Colmar had been the only manufacturer of desktop mechanical calculators in the world and he had manufactured about 1,500 machines. Eventually twenty European companies would manufacture clones of Thomas' arithmometer until WWII.
Dorr E. Felt, in the U.S., patented the Comptometer in 1886. It was the first successful key-driven adding and calculating machine. ["Key-driven" refers to the fact that just pressing the keys causes the result to be calculated, no separate lever or crank has to be operated. Other machines are sometimes called "key-set".] In 1887, he joined with Robert Tarrant to form the Felt & Tarrant Manufacturing Company. The comptometer-type calculator was the first machine to receive an all-electronic calculator engine in 1961 (the ANITA mark VII released by Sumlock comptometer of the UK).
In 1890 W. T. Odhner got the rights to manufacture his calculator back from Königsberger & C, which had held them since it was first patented in 1878, but had not really produced anything. Odhner used his Saint Petersburg workshop to manufacture his calculator and he built and sold 500 machines in 1890. This manufacturing operation shut down definitively in 1918 with 23,000 machines produced. The Odhner Arithmometer was a redesigned version of the Arithmometer of Thomas de Colmar with a pinwheel engine, which made it cheaper to manufacture and gave it a smaller footprint while keeping the advantage of having the same user interface.
In 1892 Odhner sold the Berlin branch of his factory, which he had opened a year earlier, to Grimme, Natalis & Co. They moved the factory to Braunschweig and sold their machines under the brand name of Brunsviga (Brunsviga is the Latin name of the town of Braunschweig). This was the first of many companies which would sell and manufacture clones of Odhner's machine all over the world; eventually millions were sold well into the 1970s.
In 1892, William S. Burroughs began commercial manufacture of his printing adding calculator. Burroughs Corporation became one of the leading companies in the accounting machine and computer businesses.
The "Millionaire" calculator was introduced in 1893. It allowed direct multiplication by any digit – "one turn of the crank for each figure in the multiplier". It contained a mechanical product lookup table, providing units and tens digits by differing lengths of posts. Another direct multiplier was part of the Moon-Hopkins billing machine; that company was acquired by Burroughs in the early 20th century.
Automatic mechanical calculators
In 1822, Charles Babbage presented a small cogwheel assembly that demonstrated the operation of his difference engine, a mechanical calculator which would be capable of holding and manipulating seven numbers of 31 decimal digits each. It was the first time that a calculating machine could work automatically using as input results from its previous operations. It was the first calculating machine to use a printer. The development of this machine, later called "Difference Engine No. 1," stopped around 1834.
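The difference engine mechanized the method of finite differences: for a polynomial of degree n, the nth difference is constant, so each new table entry needs only additions. A small illustrative sketch (the polynomial p(x) = 2x² + 3x + 5 is an arbitrary example, not one of Babbage's tables):

```python
# Tabulate p(x) = 2x^2 + 3x + 5 by repeated addition, as a difference
# engine does: for a degree-2 polynomial the second difference is
# constant (here 4), so each new entry costs just two additions.
value = 5         # p(0)
first_diff = 5    # p(1) - p(0)
second_diff = 4   # constant second difference

table = []
for x in range(6):
    table.append(value)
    value += first_diff        # next table value
    first_diff += second_diff  # next first difference

print(table)  # [5, 10, 19, 32, 49, 70]
```

No multiplication is ever needed, which is exactly what made the scheme suitable for a purely mechanical adder.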
In 1847, Babbage began work on an improved difference engine design—his "Difference Engine No. 2." None of these designs were completely built by Babbage. In 1991 the London Science Museum followed Babbage's plans to build a working Difference Engine No. 2 using the technology and materials available in the 19th century.
In 1855, Per Georg Scheutz completed a working difference engine based on Babbage's design. The machine was the size of a piano, and was demonstrated at the Exposition Universelle in Paris in 1855. It was used to create tables of logarithms.
In 1875, Martin Wiberg re-designed the Babbage/Scheutz difference engine and built a version that was the size of a sewing machine.
Programmable mechanical calculators
In 1834, Babbage started to design his analytical engine, which would become the undisputed ancestor of the modern mainframe computer, with two separate input streams for data and program (a primitive Harvard architecture), printers for outputting results (three different kinds), a processing unit (mill), memory (store) and the first-ever set of programming instructions. In the proposal that Howard Aiken gave IBM in 1937 while requesting funding for the Harvard Mark I, which became IBM's entry machine in the computer industry, we can read: "Few calculating machines have been designed strictly for application to scientific investigations, the notable exceptions being those of Charles Babbage and others who followed him. In 1812 Babbage conceived the idea of a calculating machine of a higher type than those previously constructed to be used for calculating and printing tables of mathematical functions. ....After abandoning the difference engine, Babbage devoted his energy to the design and construction of an analytical engine of far higher powers than the difference engine..."
In 1843, during the translation of a French article on the analytical engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers. This is considered the first computer program.
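Lovelace's note computed Bernoulli numbers on the Analytical Engine. As a modern sketch of the same computation (using the standard recurrence, not a reconstruction of her exact program), the numbers can be generated from sum over k from 0 to m of C(m+1, k)·B_k = 0:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the Bernoulli numbers B_0..B_n via the classical
    recurrence sum_{k=0}^{m} C(m+1, k) * B_k = 0 (m >= 1),
    which gives B_1 = -1/2."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(Fraction(-s, m + 1))
    return B

print([str(b) for b in bernoulli(6)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42']
```

Exact rational arithmetic is used here because the Bernoulli numbers are fractions and floating point would accumulate error.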
From 1872 until 1910, Henry Babbage worked intermittently on creating the mill, the "central processing unit" of his father's machine. After a few setbacks, he gave in 1906 a successful demonstration of the mill which printed the first 44 multiples of pi with 29 places of figures.
Cash registers
The cash register, invented by the American saloonkeeper James Ritty in 1879, addressed the old problems of disorganization and dishonesty in business transactions. It was a pure adding machine coupled with a printer, a bell and a two-sided display that showed the amount of money exchanged for the current transaction to both the customer and the store owner.
The cash register was easy to use and, unlike genuine mechanical calculators, was needed and quickly adopted by a great number of businesses. "Eighty four companies sold cash registers between 1888 and 1895, only three survived for any length of time".
In 1890, 6 years after John Patterson started NCR Corporation, 20,000 machines had been sold by his company alone against a total of roughly 3,500 for all genuine calculators combined.
By 1900, NCR had built 200,000 cash registers and more companies were manufacturing them, while the "Thomas/Payen" arithmometer company had sold only around 3,300 machines and Burroughs had sold only 1,400.
Prototypes and limited runs
In 1820, Thomas de Colmar patented the Arithmometer. It was a true four operation machine with a one digit multiplier/divider (The Millionaire calculator released 70 years later had a similar user interface). He spent the next 30 years and 300,000 Francs developing his machine. This design was replaced in 1851 by the simplified arithmometer which was only an adding machine.
From 1840, Didier Roth patented and built a few calculating machines, one of which was a direct descendant of Pascal's calculator.
In 1842, Timoleon Maurel invented the Arithmaurel, based on the Arithmometer, which could multiply two numbers by simply entering their values into the machine.
In 1845, Izrael Abraham Staffel first exhibited a machine that was able to add, subtract, divide, multiply and obtain a square root.
Around 1854, Andre-Michel Guerry invented the Ordonnateur Statistique, a cylindrical device designed to aid in summarizing the relations among data on moral variables (crime, suicide, etc.)
In 1872, Frank S. Baldwin in the U.S. invented a pinwheel calculator.
In 1877 George B. Grant of Boston, MA, began producing the Grant mechanical calculating machine capable of addition, subtraction, multiplication and division. The machine measured 13x5x7 inches and contained eighty working pieces made of brass and tempered steel. It was first introduced to the public at the 1876 Centennial Exposition in Philadelphia.
In 1883, Edmondson of the UK patented a circular stepped drum machine.
1900s to 1970s
Mechanical calculators reach their zenith
Two different classes of mechanisms had become established by this time, reciprocating and rotary. The former type of mechanism was operated typically by a limited-travel hand crank; some internal detailed operations took place on the pull, and others on the release part of a complete cycle. The illustrated 1914 machine is this type; the crank is vertical, on its right side. Later on, some of these mechanisms were operated by electric motors and reduction gearing that operated a crank and connecting rod to convert rotary motion to reciprocating.
The latter type, rotary, had at least one main shaft that made one [or more] continuous revolution[s], one addition or subtraction per turn. Numerous designs, notably European calculators, had handcranks, and locks to ensure that the cranks were returned to exact positions once a turn was complete.
The first half of the 20th century saw the gradual development of the mechanical calculator mechanism.
The Dalton adding-listing machine introduced in 1902 was the first of its type to use only ten keys, and became the first of many different models of "10-key add-listers" manufactured by many companies.
In 1948 the cylindrical Curta calculator, which was compact enough to be held in one hand, was introduced after being developed by Curt Herzstark in 1938. This was an extreme development of the stepped-gear calculating mechanism. It subtracted by adding complements; between the teeth for addition were teeth for subtraction.
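Subtracting by adding complements, as the Curta's extra teeth did, works because on a fixed-width register the overflow carry is simply discarded. A minimal sketch (the 6-digit register width is an arbitrary choice for illustration):

```python
DIGITS = 6  # width of the result register in this sketch

def subtract_by_complement(a, b):
    """Compute a - b (assuming a >= b) on a fixed-width decimal
    register by adding the ten's complement of b and dropping the
    overflow carry, as complement-adding machines do."""
    complement = 10**DIGITS - b
    return (a + complement) % 10**DIGITS

print(subtract_by_complement(4096, 729))  # 3367
```

Mechanically this means the machine only ever needs an adding mechanism; the second set of teeth merely feeds in the complement digits.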
From the early 1900s through the 1960s, mechanical calculators dominated the desktop computing market. Major suppliers in the USA included Friden, Monroe, and SCM/Marchant. These devices were motor-driven, and had movable carriages where results of calculations were displayed by dials. Nearly all keyboards were full – each digit that could be entered had its own column of nine keys, 1..9, plus a column-clear key, permitting entry of several digits at once. (See the illustration below of a Marchant Figurematic.) One could call this parallel entry, by way of contrast with ten-key serial entry that was commonplace in mechanical adding machines, and is now universal in electronic calculators. (Nearly all Friden calculators, as well as some rotary (German) Diehls had a ten-key auxiliary keyboard for entering the multiplier when doing multiplication.) Full keyboards generally had ten columns, although some lower-cost machines had eight. Most machines made by the three companies mentioned did not print their results, although other companies, such as Olivetti, did make printing calculators.
In these machines, addition and subtraction were performed in a single operation, as on a conventional adding machine, but multiplication and division were accomplished by repeated mechanical additions and subtractions. Friden made a calculator that also provided square roots, basically by doing division, but with added mechanism that automatically incremented the number in the keyboard in a systematic fashion. The last of the mechanical calculators were likely to have short-cut multiplication, and some ten-key, serial-entry types had decimal-point keys. However, decimal-point keys required significant internal added complexity, and were offered only in the last designs to be made. Handheld mechanical calculators such as the 1948 Curta continued to be used until they were displaced by electronic calculators in the 1970s.
Typical European four-operation machines use the Odhner mechanism, or variations of it. This kind of machine included the Original Odhner, Brunsviga and several following imitators, starting from Triumphator, Thales, Walther, Facit up to Toshiba. Although most of these were operated by handcranks, there were motor-driven versions. Hamann calculators externally resembled pinwheel machines, but the setting lever positioned a cam that disengaged a drive pawl when the dial had moved far enough.
Although Dalton introduced the first 10-key printing adding machine (two operations, the other being subtraction) in 1902, these features were not present in computing (four-operation) machines for many decades. The Facit-T (1932) was the first 10-key computing machine sold in large numbers. The Olivetti Divisumma-14 (1948) was the first computing machine with both a printer and a 10-key keyboard.
Full-keyboard machines, including motor-driven ones, were also built until the 1960s. Among the major manufacturers were Mercedes-Euklid, Archimedes, and MADAS in Europe; in the USA, Friden, Marchant, and Monroe were the principal makers of rotary calculators with carriages. Reciprocating calculators (most of which were adding machines, many with integral printers) were made by Remington Rand and Burroughs, among others. All of these were key-set. Felt & Tarrant made Comptometers, and Victor also made key-driven machines.
The basic mechanism of the Friden and Monroe was a modified Leibniz wheel (better known, perhaps informally, in the USA as a "stepped drum" or "stepped reckoner"). The Friden had an elementary reversing drive between the body of the machine and the accumulator dials, so its main shaft always rotated in the same direction. The Swiss MADAS was similar. The Monroe, however, reversed direction of its main shaft to subtract.
The earliest Marchants were pinwheel machines, but most of them were remarkably sophisticated rotary types. They ran at 1,300 addition cycles per minute if the [+] bar was held down. Others were limited to 600 cycles per minute, because their accumulator dials started and stopped for every cycle; Marchant dials moved at a steady and proportional speed for continuing cycles. Most Marchants had a row of nine keys on the extreme right, as shown in the photo of the Figurematic. These simply made the machine add for the number of cycles corresponding to the number on the key, and then shifted the carriage one place. Even nine add cycles took only a short time.
In a Marchant, near the beginning of a cycle, the accumulator dials moved downward "into the dip", away from the openings in the cover. They engaged drive gears in the body of the machine, which rotated them at speeds proportional to the digit being fed to them, with added movement (reduced 10:1) from carries created by dials to their right. At the completion of the cycle, the dials would be misaligned like the pointers in a traditional watt-hour meter. However, as they came up out of the dip, a constant-lead disc cam realigned them by way of a (limited-travel) spur-gear differential. As well, carries for lower orders were added in by another, planetary differential. (The machine shown has 39 differentials in its [20-digit] accumulator!)
In any mechanical calculator, in effect, a gear, sector, or some similar device moves the accumulator by the number of gear teeth that corresponds to the digit being added or subtracted – three teeth changes the position by a count of three. The great majority of basic calculator mechanisms move the accumulator by starting, then moving at a constant speed, and stopping. In particular, stopping is critical, because to obtain fast operation, the accumulator needs to move quickly. Variants of Geneva drives typically block overshoot (which, of course, would create wrong results).
However, two different basic mechanisms, the Mercedes-Euklid and the Marchant, move the dials at speeds corresponding to the digit being added or subtracted; a [1] moves the accumulator the slowest, and a [9], the fastest. In the Mercedes-Euklid, a long slotted lever, pivoted at one end, moves nine racks ("straight gears") endwise by distances proportional to their distance from the lever's pivot. Each rack has a drive pin that is moved by the slot. The rack for [1] is closest to the pivot, of course.
For each keyboard digit, a sliding selector gear, much like that in the Leibniz wheel, engages the rack that corresponds to the digit entered. Of course, the accumulator changes either on the forward or reverse stroke, but not both. This mechanism is notably simple and relatively easy to manufacture.
The Marchant, however, has, for every one of its ten columns of keys, a nine-ratio "preselector transmission" with its output spur gear at the top of the machine's body; that gear engages the accumulator gearing. When one tries to work out the numbers of teeth in such a transmission, a straightforward approach leads one to consider a mechanism like that in mechanical gasoline pump registers, used to indicate the total price. However, this mechanism is seriously bulky, and utterly impractical for a calculator; 90-tooth gears are likely to be found in the gas pump. Practical gears in the computing parts of a calculator cannot have 90 teeth. They would be either too big, or too delicate.
Given that nine ratios per column implies significant complexity, a Marchant contains a few hundred individual gears in all, many in its accumulator. Basically, the accumulator dial has to rotate 36 degrees (1/10 of a turn) for a [1], and 324 degrees (9/10 of a turn) for a [9], not allowing for incoming carries. At some point in the gearing, one tooth needs to pass for a [1], and nine teeth for a [9]. There is no way to develop the needed movement from a driveshaft that rotates one revolution per cycle with few gears having practical (relatively small) numbers of teeth.
The Marchant, therefore, has three driveshafts to feed the little transmissions. For one cycle, they rotate 1/2, 1/4, and 1/12 of a revolution. The 1/2-turn shaft carries (for each column) gears with 12, 14, 16, and 18 teeth, corresponding to digits 6, 7, 8, and 9. The 1/4-turn shaft carries (also for each column) gears with 12, 16, and 20 teeth, for digits 3, 4, and 5. Digits [1] and [2] are handled by 12- and 24-tooth gears on the 1/12-revolution shaft. Practical design places the 1/12-rev. shaft more distant, so the 1/4-turn shaft carries freely-rotating 24- and 12-tooth idler gears. For subtraction, the driveshafts reversed direction.
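The shaft fractions and tooth counts above are chosen so that the number of teeth passed per cycle equals the digit itself (e.g. a 1/2 turn of an 18-tooth gear passes 9 teeth). A quick arithmetic check of that mapping, using exact rationals:

```python
from fractions import Fraction

# (shaft rotation per cycle, drive-gear tooth count) for each digit,
# as described in the text; teeth passed = rotation * tooth count.
drive = {
    1: (Fraction(1, 12), 12),
    2: (Fraction(1, 12), 24),
    3: (Fraction(1, 4), 12),
    4: (Fraction(1, 4), 16),
    5: (Fraction(1, 4), 20),
    6: (Fraction(1, 2), 12),
    7: (Fraction(1, 2), 14),
    8: (Fraction(1, 2), 16),
    9: (Fraction(1, 2), 18),
}

for digit, (rotation, teeth) in drive.items():
    assert rotation * teeth == digit
print("teeth passed per cycle equal each digit 1-9")
```

This is why three shafts with small, practical gears suffice where a single one-revolution shaft would need up to 90-tooth gears.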
In the early part of the cycle, one of five pendants moves off-center to engage the appropriate drive gear for the selected digit.
Some machines had as many as 20 columns in their full keyboards. The monster in this field was the Duodecillion made by Burroughs for exhibit purposes.
For sterling currency, £/s/d (and even farthings), there were variations of the basic mechanisms, in particular with different numbers of gear teeth and accumulator dial positions. To accommodate shillings and pence, extra columns were added for the tens digit[s], 10 and 20 for shillings, and 10 for pence. Of course, these functioned as radix-20 and radix-12 mechanisms.
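The radix-12 and radix-20 columns implement the usual pre-decimal arithmetic: 12 pence to the shilling, 20 shillings to the pound. A small illustrative conversion (the sample amount is arbitrary):

```python
def to_lsd(total_pence):
    """Split a pence total into pounds/shillings/pence:
    12 pence = 1 shilling, 20 shillings = 1 pound."""
    shillings, pence = divmod(total_pence, 12)
    pounds, shillings = divmod(shillings, 20)
    return pounds, shillings, pence

print(to_lsd(1000))  # (4, 3, 4), i.e. 1000d = £4 3s 4d
```

On the machine this carry-at-12 and carry-at-20 behavior was built into the gearing of the extra columns rather than computed.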
A variant of the Marchant, called the Binary-Octal Marchant, was a radix-8 (octal) machine. It was sold to check very early vacuum-tube (valve) binary computers for accuracy. (Back then, the mechanical calculator was much more reliable than a tube/valve computer.)
As well, there was a twin Marchant, comprising two pinwheel Marchants with a common drive crank and reversing gearbox. Twin machines were relatively rare, and apparently were used for surveying calculations. At least one triple machine was made.
The Facit calculator, and one similar to it, are basically pinwheel machines, but the array of pinwheels moves sidewise, instead of the carriage. The pinwheels are biquinary; digits 1 through 4 cause the corresponding number of sliding pins to extend from the surface; digits 5 through 9 also extend a five-tooth sector as well as the same pins for 6 through 9.
The keys operate cams that operate a swinging lever to first unlock the pin-positioning cam that is part of the pinwheel mechanism; further movement of the lever (by an amount determined by the key's cam) rotates the pin-positioning cam to extend the necessary number of pins.
Stylus-operated adders with circular slots for the stylus, and side-by-side wheels, as made by Sterling Plastics (USA), had an ingenious anti-overshoot mechanism to ensure accurate carries.
The end of an era
Mechanical calculators continued to be sold, though in rapidly decreasing numbers, into the early 1970s, with many of the manufacturers closing down or being taken over. Comptometer type calculators were often retained for much longer to be used for adding and listing duties, especially in accounting, since a trained and skilled operator could enter all the digits of a number in one movement of the hands on a comptometer quicker than was possible serially with a 10-key electronic calculator. In fact, it was quicker to enter larger digits in two strokes using only the lower-numbered keys; for instance, a 9 would be entered as 4 followed by 5. Some key-driven calculators had keys for every column, but only 1 through 5; they were correspondingly compact. The spread of the computer rather than the simple electronic calculator put an end to the comptometer. Also, by the end of the 1970s, the slide rule had become obsolete.
See also
Abacus
Adding machine
Calculator
History of computing hardware
Mechanical computer
Tabulating machine
George Brown (inventor)
References
Sources
Reprinted by Arno Press, 1972.
External links
Mařík, Robert
Office equipment
Mathematical tools
Articles containing video clips |
2384270 | https://en.wikipedia.org/wiki/Bernhard%20Preim | Bernhard Preim | Bernhard Preim (born 1969) is a specialist in human–computer interface design as well as in visual computing for medicine.
He is currently professor of visualization at University of Magdeburg, Germany.
Preim received the diploma in computer science in 1994 (minor in mathematics) and a PhD in 1998 from the Otto-von-Guericke University Magdeburg (PhD thesis "Interactive Illustrations and Animations for the Exploration of Spatial Relations", supervised by Thomas Strothotte). In 1999, he joined the staff of MeVis (Center for Medical Diagnostic Systems and Visualization, headed by Heinz-Otto Peitgen). In close collaboration with radiologists and surgeons, he directed the work on "computer-aided planning in liver surgery" and initiated several projects funded by the German Research Council in the area of computer-aided surgery. In June 2002, he received the Habilitation degree (venia legendi) for computer science from the University of Bremen. Since March 2003 he has been full professor for "Visualization" at the computer science department of the Otto-von-Guericke-University of Magdeburg, heading a research group focused on medical visualization and applications in surgical education and surgery planning. These developments are summarized in the comprehensive textbook Visualization in Medicine (co-authored with Dirk Bartz), published by Morgan Kaufmann in June 2007.
Bernhard Preim was founding speaker of the working group Medical Visualization in the German Society for Computer Science (2003–2012). He is the chair of the scientific advisory board of ICCAS and, since 2013, president of the CURAC (German Society for Computer- and Robot-Assisted Surgery), as well as Visiting Professor at the University of Bremen, where he closely collaborates with MeVis Research (now Fraunhofer MEVIS). Together with Charl Botha, TU Delft, he founded the VCBM Eurographics workshop series.
Books
Visual Computing for Medicine, Bernhard Preim and Charl Botha, 2013, San Francisco: Morgan Kaufmann.
Interaktive Systeme, Bernhard Preim and Raimund Dachselt 2010, Berlin: Springer.
Visualization in Medicine, Bernhard Preim and Dirk Bartz, 2007, San Francisco: Morgan Kaufmann.
Entwicklung interaktiver Systeme: Grundlagen, Fallbeispiele und innovative Anwendungsfelder, Bernhard Preim, 1999, Berlin: Springer.
Interaktive Illustrationen und Animationen zur Erklärung komplexer räumlicher Zusammenhänge, Bernhard Preim, 1998, Düsseldorf: VDI Verlag.
Selected papers
How to Render Frames and Influence People, Thomas Strothotte, Bernhard Preim, Andreas Raab, Jutta Schumann, David R. Forsey, In: Computer Graphics Forum (13) 3, Proceedings of Eurographics 1994, pp. 455–466, 1994.
Coherent zooming of illustrations with 3D-graphics and text, Bernhard Preim, Andreas Raab, Thomas Strothotte, Proceedings of the conference on Graphics interface '97, , Canadian Information Processing Society.
Visualization and Interaction Techniques for the Exploration of Vascular Structures. Horst K. Hahn, Bernhard Preim, Dirk Selle, Heinz-Otto Peitgen, IEEE Visualization'2001.
Analysis of vasculature for liver surgical planning, Selle, D., Preim, B., Schenk, A., Peitgen, H.-O., IEEE Transactions on Medical Imaging, Nov 2002, Volume: 21, Issue: 11, pp 1344–1357, .
Integration of Measurement Tools in Medical 3d Visualizations. Bernhard Preim, Christian Tietjen, Wolf Spindler, Heinz-Otto Peitgen. IEEE Visualization'2002.
Visualization of Vascular Structures: Method, Validation and Evaluation. Steffen Oeltze and Bernhard Preim, IEEE Transactions on Medical Imaging, 24(4), April 2005, pp. 540–549
Combining Silhouettes, Surface, and Volume Rendering for Surgery Education and Planning, Christian Tietjen, Tobias Isenberg, Bernhard Preim. EuroVis'2005. pp. 303–310
Real-Time Illustration of Vascular Structures for Surgery, Felix Ritter, Christian Hansen, Bernhard Preim, Volker Dicken, and Olaf Konrad-Verse. IEEE Transactions on Visualization, 12:877–884, 2006
Viewpoint Selection for Intervention Planning, Konrad Mühler, Mathias Neugebauer, Christian Tietjen, Bernhard Preim, EuroVis'2007. pp. 267–274
External links
Information on Preim
Google Scholar results for Bernhard Preim
References
OP-Planung am Computer in Magdeburg
Virtuelles Training anhand von 3D-Modellen für Chirurgen
1969 births
Living people
German computer scientists
University of Bremen alumni
Otto von Guericke University Magdeburg alumni
Otto von Guericke University Magdeburg faculty |
4496518 | https://en.wikipedia.org/wiki/Topological%20quantum%20computer | Topological quantum computer | A topological quantum computer is a theoretical quantum computer proposed by Russian-American physicist Alexei Kitaev in 1997. It employs two-dimensional quasiparticles called anyons, whose world lines pass around one another to form braids in a three-dimensional spacetime (i.e., one temporal plus two spatial dimensions). These braids form the logic gates that make up the computer. The advantage of a quantum computer based on quantum braids over using trapped quantum particles is that the former is much more stable. Small, cumulative perturbations can cause quantum states to decohere and introduce errors in the computation, but such small perturbations do not change the braids' topological properties. This is like the effort required to cut a string and reattach the ends to form a different braid, as opposed to a ball (representing an ordinary quantum particle in four-dimensional spacetime) bumping into a wall.
While the elements of a topological quantum computer originate in a purely mathematical realm, experiments in fractional quantum Hall systems indicate these elements may be created in the real world using semiconductors made of gallium arsenide at a temperature of near absolute zero and subjected to strong magnetic fields.
Introduction
Anyons are quasiparticles in a two-dimensional space. Anyons are neither fermions nor bosons, but like fermions, they cannot occupy the same state. Thus, the world lines of two anyons cannot intersect or merge, which allows their paths to form stable braids in space-time. Anyons can form from excitations in a cold, two-dimensional electron gas in a very strong magnetic field, and carry fractional units of magnetic flux. This phenomenon is called the fractional quantum Hall effect. In typical laboratory systems, the electron gas occupies a thin semiconducting layer sandwiched between layers of aluminium gallium arsenide.
When anyons are braided, the transformation of the quantum state of the system depends only on the topological class of the anyons' trajectories (which are classified according to the braid group). Therefore, the quantum information which is stored in the state of the system is impervious to small errors in the trajectories. In 2005, Sankar Das Sarma, Michael Freedman, and Chetan Nayak proposed a quantum Hall device that would realize a topological qubit. In a key development for topological quantum computers, in 2005 Vladimir J. Goldman, Fernando E. Camino, and Wei Zhou claimed to have created and observed the first experimental evidence for using a fractional quantum Hall effect to create actual anyons, although others have suggested their results could be the product of phenomena not involving anyons. Non-abelian anyons, a species required for topological quantum computers, have yet to be experimentally confirmed. Possible experimental evidence has been found, but the conclusions remain contested. In 2018, scientists again claimed to have isolated the required Majorana particles, but the finding was retracted in 2021. Quanta Magazine stated in 2021 that "no one has convincingly shown the existence of even a single (Majorana zero-mode) quasiparticle".
Topological vs. standard quantum computer
Topological quantum computers are equivalent in computational power to other standard models of quantum computation, in particular to the quantum circuit model and to the quantum Turing machine model. That is, any of these models can efficiently simulate any of the others. Nonetheless, certain algorithms may be a more natural fit to the topological quantum computer model. For example, algorithms for evaluating the Jones polynomial were first developed in the topological model, and only later converted and extended in the standard quantum circuit model.
Computations
To live up to its name, a topological quantum computer must provide the unique computation properties promised by a conventional quantum computer design, which uses trapped quantum particles. In 2000, Michael H. Freedman, Alexei Kitaev, Michael J. Larsen, and Zhenghan Wang proved that a topological quantum computer can, in principle, perform any computation that a conventional quantum computer can do, and vice versa.
They found that a conventional quantum computer device, given an error-free operation of its logic circuits, will give a solution with an absolute level of accuracy, whereas a topological quantum computing device with flawless operation will give the solution with only a finite level of accuracy. However, any level of precision for the answer can be obtained by adding more braid twists (logic circuits) to the topological quantum computer, in a simple linear relationship. In other words, a reasonable increase in elements (braid twists) can achieve a high degree of accuracy in the answer. Actual computations [gates] are performed by the edge states of a fractional quantum Hall effect. This makes models of one-dimensional anyons important. In one space dimension, anyons are defined algebraically.
Error correction and control
Even though quantum braids are inherently more stable than trapped quantum particles, there is still a need to control for error-inducing thermal fluctuations, which produce random stray pairs of anyons that interfere with adjoining braids. Controlling these errors is simply a matter of separating the anyons to a distance where the rate of interfering strays drops to near zero. Simulating the dynamics of a topological quantum computer may be a promising method of implementing fault-tolerant quantum computation even with a standard quantum information processing scheme. Raussendorf, Harrington, and Goyal have studied one model, with promising simulation results.
Example: Computing with Fibonacci anyons
One of the prominent examples in topological quantum computing is with a system of Fibonacci anyons. In the context of conformal field theory, Fibonacci anyons are described by the Yang–Lee model, the SU(2) level-3 special case of the Chern–Simons theory and Wess–Zumino–Witten models. These anyons can be used to create generic gates for topological quantum computing. There are three main steps for creating a model:
Choose our basis and restrict our Hilbert space
Braid the anyons together
Fuse the anyons at the end, and detect how they fuse in order to read the output of the system.
State preparation
Fibonacci anyons are defined by three qualities:
They have a topological charge of τ. In this discussion, we also consider the trivial charge 1, the ‘vacuum’ charge left behind when two anyons annihilate each other.
Each of these anyons is its own antiparticle: the antiparticle of τ is τ, and the antiparticle of 1 is 1.
If brought close to each other, they will ‘fuse’ together in a nontrivial fashion. Specifically, the ‘fusion’ rules are:
1 × 1 = 1
1 × τ = τ × 1 = τ
τ × τ = 1 + τ
Many of the properties of this system can be explained similarly to that of two spin-1/2 particles. In particular, we use the same tensor product and direct sum operators.
The last ‘fusion’ rule can be extended to a system of three anyons:
τ × τ × τ = τ × (1 + τ) = τ + 1 + τ = 1 + 2·τ
Thus, fusing three anyons will yield a final state of total charge τ in 2 ways, or a charge of 1 in exactly one way. We use these three states to define our basis. However, because we wish to encode these three-anyon states as superpositions of 0 and 1, we need to limit the basis to a two-dimensional Hilbert space. Thus, we consider only the two states with a total charge of τ. This choice is purely phenomenological. In these states, we group the two leftmost anyons into a 'control group', and leave the rightmost as a 'non-computational anyon'. We classify a |0⟩ state as one where the control group has a total 'fused' charge of 1, and a |1⟩ state as one where the control group has a total 'fused' charge of τ. For a more complete description, see Nayak.
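The counting of fusion outcomes above can be reproduced by iterating the fusion rules, which is also why these anyons are called "Fibonacci": the number of fusion channels grows as the Fibonacci sequence. The following sketch (an illustration, not part of the original article) fuses n τ anyons one at a time:

```python
def fusion_channels(n):
    """Count the fusion outcomes of n Fibonacci (tau) anyons.
    Returns {'1': ways to fuse to the vacuum, 'tau': ways to fuse to tau},
    using the rules 1 x tau = tau and tau x tau = 1 + tau."""
    ways = {'1': 1, 'tau': 0}  # start from the vacuum, fuse in one tau at a time
    for _ in range(n):
        ways = {'1': ways['tau'],                # only tau x tau contains 1
                'tau': ways['1'] + ways['tau']}  # 1 x tau and tau x tau both contain tau
    return ways

print(fusion_channels(3))  # {'1': 1, 'tau': 2}: one channel to charge 1, two to charge tau
```

For three anyons this gives exactly the 2 + 1 channels described above; the pair of τ-channels is the two-dimensional space used as the qubit.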
Gates
Following the ideas above, adiabatically braiding these anyons around each other will result in a unitary transformation. These braid operators are a result of two subclasses of operators:
The F matrix
The R matrix
The R matrix can be conceptually thought of as the topological phase that is imparted onto the anyons during the braid. As the anyons wind around each other, they pick up some phase due to the Aharonov–Bohm effect.
The F matrix is a result of the physical rotations of the anyons. As they braid around each other, it is important to realize that the bottom two anyons—the control group—will still distinguish the state of the qubit. Thus, braiding the anyons will change which anyons are in the control group, and therefore change the basis. We evaluate the anyons by always fusing the control group (the bottom anyons) together first, so exchanging which anyons these are will rotate the system. Because these anyons are non-abelian, the order of the anyons (which ones are within the control group) will matter, and as such they will transform the system.
The complete braid operator can be derived as:
B = F⁻¹ R F
In order to mathematically construct the F and R operators, we can consider permutations of these F and R operators. We know that if we sequentially change the basis that we are operating on, this will eventually lead us back to the same basis. Similarly, we know that if we braid anyons around each other a certain number of times, this will lead back to the same state. These axioms are called the pentagon and hexagon axioms, respectively, as the operations can be visualized with a pentagon/hexagon of state transformations. Although mathematically difficult, these relations are much easier to approach visually.
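As a concrete illustration of the F and R matrices and the braid operator built from them, the sketch below uses the matrices commonly quoted for Fibonacci anyons (conventions and phases vary between references, so treat the specific entries as one standard choice rather than the article's own derivation):

```python
import numpy as np

phi = (1 + 5 ** 0.5) / 2  # golden ratio

# One standard choice of Fibonacci-anyon F and R matrices, written in the
# basis {1, tau} for the fused charge of the control pair.
F = np.array([[1 / phi,           1 / np.sqrt(phi)],
              [1 / np.sqrt(phi), -1 / phi]])
R = np.diag([np.exp(-4j * np.pi / 5), np.exp(3j * np.pi / 5)])

# F is real, symmetric, and its own inverse (phi^2 = phi + 1), hence unitary:
assert np.allclose(F @ F, np.eye(2))

# Braiding the pair not currently in the fusion basis requires a basis
# change: B = F^{-1} R F, which equals F R F here since F is its own inverse.
B = F @ R @ F
assert np.allclose(B @ B.conj().T, np.eye(2))  # B is unitary, as a braid must be
```

Products of such B and R matrices densely fill the space of single-qubit unitaries, which is the sense in which Fibonacci braiding is universal.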
With these braid operators, we can finally formalize the notion of braids in terms of how they act on our Hilbert space and construct arbitrary universal quantum gates.
See also
Ginzburg–Landau theory
Husimi Q representation
Random matrix
Topological defect
Toric code
References
Further reading
Quantum field theory
Quantum information science
Classes of computers
Models of computation
Topology
Quantum computing |
30599599 | https://en.wikipedia.org/wiki/Eric%20Horvitz | Eric Horvitz | Eric Joel Horvitz () is an American computer scientist, and Technical Fellow at Microsoft, where he serves as the company's first Chief Scientific Officer. He was previously the director of Microsoft Research Labs, including research centers in Redmond, WA, Cambridge, MA, New York, NY, Montreal, Canada, Cambridge, UK, and Bangalore, India.
Horvitz was elected a member of the National Academy of Engineering in 2013 for computational mechanisms for decision making under uncertainty and with bounded resources.
Biography
Horvitz received his Ph.D. and M.D. from Stanford University. His doctoral dissertation, Computation and Action Under Bounded Resources, and follow-on research introduced models of bounded rationality founded in probability and decision theory. He did his doctoral work under advisors Ronald A. Howard, George B. Dantzig, Edward H. Shortliffe, and Patrick Suppes.
He is currently the Chief Scientific Officer of Microsoft. He has been elected a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a member of the National Academy of Engineering (NAE), and a Fellow of the American Academy of Arts and Sciences and of the American Association for the Advancement of Science (AAAS). He was elected to the ACM CHI Academy in 2013 and named an ACM Fellow in 2014 "For contributions to artificial intelligence, and human-computer interaction."
He was elected to the American Philosophical Society in 2018.
In 2015, he was awarded the AAAI Feigenbaum Prize, a biennial award for sustained and high-impact contributions to the field of artificial intelligence through the development of computational models of perception, reflection and action, and their application in time-critical decision making, and intelligent information, traffic, and healthcare systems.
In 2015, he was also awarded the ACM - AAAI Allen Newell Award, for "contributions to artificial intelligence and human-computer interaction spanning the computing and decision sciences through developing principles and models of sensing, reflection, and rational action."
He serves on the President's Council of Advisors on Science and Technology (PCAST), the Scientific Advisory Committee of the Allen Institute for Artificial Intelligence (AI2), and the Computer Science and Telecommunications Board (CSTB) of the US National Academies.
He has served as president of the Association for the Advancement of AI (AAAI), on the NSF Computer & Information Science & Engineering (CISE) Advisory Board, on the council of the Computing Community Consortium (CCC), chair of the Section on Information, Computing, and Communications of the American Association for the Advancement of Science (AAAS), on the Board of Regents of the US National Library of Medicine (NLM), and the U.S. National Security Commission on AI.
Work
Horvitz's research interests span theoretical and practical challenges with developing systems that perceive, learn, and reason. His contributions include advances in principles and applications of machine learning and inference, information retrieval, human-computer interaction, bioinformatics, and e-commerce.
Horvitz played a significant role in the use of probability and decision theory in artificial intelligence. His work raised the credibility of artificial intelligence in other areas of computer science and computer engineering, influencing fields ranging from human-computer interaction to operating systems. His research helped establish the link between artificial intelligence and decision science. As an example, he coined the concept of bounded optimality, a decision-theoretic approach to bounded rationality. The influences of bounded optimality extend beyond computer science into cognitive science and psychology.
He studied the use of probability and utility to guide automated reasoning for decision making. The methods include consideration of the solving of streams of problems in environments over time. In related work, he applied probability and machine learning to solve combinatorial problems and to guide theorem proving. He introduced the anytime algorithm paradigm in AI, where partial results, probabilities, or utilities of outcomes are refined with computation under different availabilities or costs of time, guided by the expected value of computation.
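The anytime algorithm paradigm mentioned above can be illustrated with a minimal sketch (this is a generic textbook example, not Horvitz's own code): a Monte Carlo estimate of π that can be interrupted after any batch and still return a usable, progressively refined answer.

```python
import random

def estimate_pi_anytime(batches, batch_size=10_000, seed=0):
    """Minimal anytime algorithm: yields a partial estimate of pi after
    every batch, so computation can stop whenever time runs out."""
    rng = random.Random(seed)
    hits = total = 0
    for _ in range(batches):
        for _ in range(batch_size):
            x, y = rng.random(), rng.random()
            hits += (x * x + y * y) <= 1.0  # point falls inside the quarter circle
        total += batch_size
        yield 4.0 * hits / total  # partial result, refined with each batch

estimates = list(estimate_pi_anytime(batches=20))
# Later partial results are (statistically) closer to pi than early ones.
print(estimates[0], estimates[-1])
```

The expected value of computation, in this framing, is the expected improvement in the answer from running one more batch, weighed against the cost of the time it takes.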
He has issued long-term challenge problems for AI—and has espoused a vision of open-world AI, where machine intelligences have the ability to understand and perform well in the larger world where they encounter situations they have not seen before.
He has explored synergies between human and machine intelligence. In this area, he studied the value of displayed information, methods for guiding machine versus human initiative, learning models of human attention, and using machine learning and planning to identify and merge the complementary abilities of people and AI systems. He has worked on making AI inference more understandable. His use of machine learning to build models of human surprise was featured as a technology breakthrough by MIT Technology Review.
He investigated the use of AI methods to provide assistance to users including help with software and in the daily life.
He made contributions to multimodal interaction. In 2015, he received the ACM ICMI Sustained Accomplishment Award for contributions to multimodal interaction. His work on multimodal interaction includes studies of situated interaction, where systems consider physical details of open-world settings and can perform dialog with multiple people.
He co-authored probability-based methods to enhance privacy, including a model of altruistic sharing of data called community sensing and risk-sensitive approaches including stochastic privacy.
He is Microsoft's top inventor.
He led efforts in applying AI methods to computing systems, including machine learning for memory management in Windows, web prefetching, graphics rendering, and web crawling. He did early work on AI for debugging software.
Horvitz speaks on the topic of artificial intelligence, including on NPR and the Charlie Rose show. Online talks include both technical lectures and presentations for general audiences (TEDx Austin: Making Friends with Artificial Intelligence). His research has been featured in The New York Times and MIT Technology Review.
He has testified before the US Senate on progress, opportunities, and challenges with AI.
AI and Society
He has addressed technical and societal challenges and opportunities with the fielding of AI technologies in the open world, including beneficial uses of AI, AI safety and robustness, and where AI systems and capabilities can have inadvertent effects, pose dangers, or be misused. He has presented on caveats with applications of AI in military settings. He and Thomas G. Dietterich called for work on AI alignment, saying that AI systems "must reason about what people intend rather than carrying out commands literally."
He has called for action on potential risks to civil liberties posed by government uses of data in AI systems. He and privacy scholar Deirdre Mulligan stated that society must balance privacy concerns with benefits of data for social benefit.
He has presented on the risks of AI-enabled deepfakes and contributed to media provenance technologies that cryptographically certify the source and history of edits of digital content.
Asilomar AI Study
He served as President of the AAAI from 2007–2009. As AAAI President, he called together and co-chaired the Asilomar AI study which culminated in a meeting of AI scientists at Asilomar in February 2009. The study considered the nature and timing of AI successes and reviewed concerns about directions with AI developments, including the potential loss of control over computer-based intelligences, and also efforts that could reduce concerns and enhance long-term societal outcomes. The study was the first meeting of AI scientists to address concerns about superintelligence and loss of control of AI and attracted interest by the public.
In coverage of the Asilomar study, he said that scientists must study and respond to notions of superintelligent machines and concerns about artificial intelligence systems escaping from human control. In a later NPR interview, he said that investments in scientific studies of superintelligences would be valuable to guide proactive efforts even if people believed that the probability of losing control of AI was low, because of the cost of such outcomes.
One Hundred Year Study on Artificial Intelligence
In 2014, Horvitz defined and funded with his wife the One Hundred Year Study of Artificial Intelligence (AI100) at Stanford University. In 2016, the AI Index was launched as a project of the One Hundred Year Study.
According to Horvitz, the AI100 gift, which may increase in the future, is sufficient to fund the study for a century. A Stanford press release stated that sets of committees over a century will "study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play." A framing memo for the study calls out 18 topics for consideration, including law, ethics, the economy, war, and crime. Topics include abuses of AI that could pose threats to democracy and freedom and addressing possibilities of superintelligences and loss of control of AI.
The One Hundred Year Study is overseen by a Standing Committee. The Standing Committee formulates questions and themes and organizes a Study Panel every five years. The Study Panel issues a report that assesses the status and rate of progress of AI technologies, challenges, and opportunities with regard to AI's influences on people and society.
The 2015 study panel of the One Hundred Year Study, chaired by Peter Stone, released a report in September 2016, titled "Artificial Intelligence and Life in 2030." The panel advocated for increased public and private spending on the industry, recommended increased AI expertise at all levels of government, and recommended against blanket government regulation. Panel chair Peter Stone argues that AI won't automatically replace human workers, but rather, will supplement the workforce and create new jobs in tech maintenance. While mainly focusing on the next 15 years, the report touched on concerns and expectations that had risen in prominence over the last decade about the risks of superintelligent robots, stating "Unlike in the movies, there's no race of superhuman robots on the horizon or probably even possible." Stone stated that "it was a conscious decision not to give credence to this in the report."
The report of the second cycle of the AI100 study, chaired by Michael Littman, was published in 2021.
Founding of Partnership on AI
He co-founded and has served as board chair of the Partnership on AI, a non-profit organization bringing together Apple, Amazon, Facebook, Google, DeepMind, IBM, and Microsoft with representatives from civil society, academia, and non-profit R&D. The organization's website points at initiatives, including studies of risk scores in criminal justice, facial recognition systems, AI and economy, AI safety, AI and media integrity, and documentation of AI systems.
Microsoft Aether Committee
He founded and chairs the Aether Committee at Microsoft, Microsoft's internal committee on the responsible development and fielding of AI technologies. He reported that the Aether Committee had made recommendations on and guided decisions that have influenced Microsoft's commercial AI efforts. In April 2020, Microsoft published content on principles, guidelines, and tools developed by the Aether Committee and its working groups, including teams focused on AI reliability and safety, bias and fairness, intelligibility and explanation, and human-AI collaboration.
Publications
Books
Selected articles
Podcasts
References
External links
Profile page at Microsoft Research
One Hundred Year Study on Artificial Intelligence (AI100)
Audio: Challenge Problems for AI
TEDx Austin: Making Friends with Artificial Intelligence
NPR: Science Friday: Improving Healthcare One Search at a Time
BBC: "Artificial intelligence: How to turn Siri into Samantha"
Keynote address, Conference on Knowledge Discovery and Data Mining (SIGKDD), August 2014: Videolectures.net
Living people
American computer scientists
People in information technology
Microsoft employees
Fellows of the American Academy of Arts and Sciences
Fellows of the Association for the Advancement of Artificial Intelligence
Presidents of the Association for the Advancement of Artificial Intelligence
Members of the American Philosophical Society
Year of birth missing (living people) |
1213399 | https://en.wikipedia.org/wiki/Pinnacle%20Entertainment%20%28United%20Kingdom%29 | Pinnacle Entertainment (United Kingdom) | Pinnacle Entertainment was an entertainment group based in the United Kingdom spanning many divisions, but primarily known as one of the UK's leading independent record label distributors.
History
In 1996, Windsong/Pinnacle was purchased by the Zomba Group. When Zomba was subsequently purchased by BMG in 2002, Windsong/Pinnacle was moved under Bertelsmann's group Arvato AG. In January 2008, Windsong completed a buyout from Arvato, gaining independent ownership again. However, in the midst of an economic crisis, the company was forced to go into administration, effective December 3, 2008. It was then that Windsong Exports purchased the group and turned it into the UK's leading music distributor. Following that, the companies were usually referred to jointly as Windsong/Pinnacle.
Divisions
Audio Services
Pinnacle's music division Audio Services Ltd., was primarily known for the distribution of over 400 independent record labels in the UK, but it also consisted of its own record label, label management for distributed labels, and marketing and sales divisions.
Record labels
Pinnacle Records (artists include Flintlock, James Arthur Edwards)
Firebird Records (Artists included Nick Straker)
Collins Classics - Formed in 1989, closed in 1998.
Pinnacle Software
Formed in 1992, Pinnacle Software served as a publishing company that represented many computer software companies such as 3MV, New Note, Shellshock and Beechwood. It provided retail, sales, marketing and logistics services to its publishing partners. Deals with Electronic Arts and Ubisoft helped introduce an exclusive distribution model to the UK software market. Some brands associated with Pinnacle Software include FIFA, Harry Potter, Lord of the Rings, Splinter Cell and Prince of Persia.
Gauntlet Entertainment
Gauntlet Entertainment, a distributor of video games, was formed in 1996 by Peter Rezon to create a focused distribution model for video games, similar to what Pinnacle Software had been doing with other UK software companies. The distribution model it pioneered became the de facto model used by most major software publishers in the UK. Gauntlet Entertainment succeeded in doing so with groundbreaking deals with Sierra, NovaLogic, THQ International, CDV Software and many other international publishing houses. It has been associated with many of the top brands in the market place, including WWE Smackdown!, Cossacks, Rugrats, Warcraft, Half Life, Scooby Doo, Tetris and many more.
See also
List of record labels
References
External links
Pinnacle Entertainment
Pinnacle Vision
Defunct record labels of the United Kingdom
Record labels disestablished in 2008
IFPI members |
18501594 | https://en.wikipedia.org/wiki/Meanings%20of%20minor%20planet%20names%3A%20188001%E2%80%93189000 | Meanings of minor planet names: 188001–189000 |
188001–188100
|-id=061
| 188061 Loomis || || Craig P. Loomis (born 1961), an American computing engineer with the Sloan Digital Sky Survey ||
|}
188101–188200
|-id=139
| 188139 Stanbridge || || Dale R. Stanbridge (born 1962), a senior engineer at KinetX who worked as a Navigation Team Member for the New Horizons mission to Pluto ||
|}
188201–188300
|-id=256
| 188256 Stothoff || || Maria M. Stothoff (born 1966), a Public Affairs Deputy Chief at the Southwest Research Institute who worked for the New Horizons mission to Pluto ||
|}
188301–188400
|-bgcolor=#f2f2f2
| colspan=4 align=center |
|}
188401–188500
|-id=446
| 188446 Louischevrolet || || Louis Chevrolet (1878–1941), a Swiss race car driver and co-founder of the Chevrolet Motor Car Company in 1911 ||
|}
188501–188600
|-id=502
| 188502 Darrellstrobel || || Darrell F. Strobel (born 1942), a research scientist at Johns Hopkins University who worked as a Co-Investigator for atmospheric science for the New Horizons mission to Pluto ||
|-id=534
| 188534 Mauna Kea || || Mauna Kea (4,207 m; meaning "White Mountain"), a dormant volcano on the island of Hawaii ||
|-id=576
| 188576 Kosenda || || Setsuo Kosenda (born 1955) established the Mikawa Astronomical Observatory, located in the Niigata region of Japan ||
|}
188601–188700
|-id=693
| 188693 Roosevelt || || Theodore Roosevelt (1858–1919) was the 26th President of the United States and is one of the most admired leaders in American history. Among his many accomplishments he is well known for his conservationism, having established the US Forest Service, five National Parks, 18 National Monuments, and 150 National Forests. ||
|}
188701–188800
|-bgcolor=#f2f2f2
| colspan=4 align=center |
|}
188801–188900
|-id=847
| 188847 Rhipeus || || Rhipeus (Ripheus), from Classical mythology. The Trojan warrior died fighting alongside his comrade Aeneas during the Trojan War. Rhipeus, the most just of the Trojans, was not rewarded by the gods (Virgil). ||
|-id=867
| 188867 Tin Ho || || Tin Ho, or Tianhe District, is one of the fastest developing areas in Guangzhou, China. Many of Guangzhou's most iconic buildings are found in this district. Tin Ho is the Chinese name for the Milky Way. ||
|-id=894
| 188894 Gerberlouis || || Louis Gerber (1928–2021), a Swiss banker and amateur astronomer from Fribourg, who was the first treasurer of the Robert A. Naef Foundation, which operates the Observatory Naef Épendes, where this minor planet was discovered. ||
|}
188901–189000
|-id=973
| 188973 Siufaiwing || || Siu Fai Wing (born 1946), a Chinese painter and sculptor ||
|-id=000
| 189000 Alfredkubin || || Alfred Kubin (1877–1959) is considered an important representative of expressionism ||
|}
References
188001-189000 |
37931961 | https://en.wikipedia.org/wiki/Max%20Browne | Max Browne | Max Austin Browne (born February 2, 1995) is an American football analyst and former quarterback. He played college football for the USC Trojans (2013–2016) and Pittsburgh Panthers (2017).
Browne committed to USC on April 4, 2012, during his junior year, and was considered the best quarterback recruit of his class by Rivals.com and Scout.com. He transferred to Pitt on December 15, 2016, as a graduate transfer.
Early years
Born and raised in Sammamish, Washington, a suburb east of Seattle, Browne attended Beaver Lake Middle School and Skyline High School in Sammamish. During his high school career, he completed 73.5 percent of his passes for 12,951 yards and 146 touchdowns. He was the Gatorade Player of the Year for Washington in 2011 and 2012, and led the Spartans to three straight Class 4A state finals, winning the final two.
Following his senior season, Browne participated in the 2013 U.S. Army All-American Bowl and was awarded the prestigious Hall Trophy as U.S. Army Player of the Year.
College career
On April 4, 2012, Browne committed to play football at the University of Southern California. He selected USC as his college destination over Oklahoma, Washington, and Alabama. In 2013, Browne was redshirted as a true freshman, after failing to beat out Cody Kessler and Max Wittek for the starting quarterback job. Head coach Lane Kiffin eventually named Kessler the starting quarterback in the fall. In 2014, after Steve Sarkisian took over as head coach, Browne again lost the starting quarterback battle to Kessler, who was named the starting quarterback during spring practice.
In 2015, Browne was once again the backup to Kessler. With Kessler's departure to the NFL Draft after the 2015 season, Browne was the presumptive starter going into spring practice in 2016. Instead, he faced a stiff challenge from redshirt freshman Sam Darnold. On August 20, 2016, Browne was officially named the starting quarterback by head coach Clay Helton.
Browne started his first game as a Trojan quarterback in a 52–6 loss to Alabama on September 3. On September 19, Browne was replaced by Sam Darnold as starting quarterback, a move that prompted speculation that the redshirt junior would transfer at the end of the season to take advantage of his status as a graduate student and start immediately with a new team. He eventually transferred to the University of Pittsburgh. At Pitt, Browne had a successful spring, being voted a captain and earning the starting quarterback job. However, after starting the first three games of the season and going 1–2 (beating Youngstown State but losing to both Penn State and Oklahoma State), Browne was replaced by redshirt sophomore Ben DiNucci. DiNucci completed 12 of 19 passes with one touchdown in the Panthers' next game, but the Panthers lost their third game of the season to Georgia Tech, 35–17. Browne was then given another chance to spark the offense against Rice, and he delivered: he completed 28 of 32 passes (87.5 percent) for 410 yards and four touchdowns as the Panthers won 42–10.
On October 7, 2017, Browne suffered a shoulder injury on a sack during the game against Syracuse. It was later revealed that the injury required surgery and Browne would miss the rest of the season.
Statistics
Source:
Broadcasting career
Browne was not signed by a National Football League team after his college career ended; in December 2018, he attended a tryout with the Pittsburgh Steelers but was not offered a contract. During the 2018 college football season, he began analyzing Pac-12 Conference teams on YouTube with hopes of becoming a television color commentator.
In 2019, he became a pre- and post-game analyst for USC on KABC (AM), while also working as a contributor for TrojanSports.com.
Personal life
Browne is engaged to former USC volleyball player and mental health and body image advocate Victoria Garrick; the couple became engaged on August 20, 2021.
References
External links
Pittsburgh Panthers bio
USC Trojans bio
1995 births
Living people
American football quarterbacks
People from Sammamish, Washington
Pittsburgh Panthers football players
Players of American football from Washington (state)
Sportspeople from King County, Washington
USC Trojans football players |
35284565 | https://en.wikipedia.org/wiki/List%20of%20colleges%20and%20universities%20in%20Metro%20Manila | List of colleges and universities in Metro Manila | This is a partial list of colleges and universities located in the Metro Manila region of the Philippines.
State universities and local colleges
City of Malabon University
City University of Pasay
Colegio de Muntinlupa
Eulogio "Amang" Rodriguez Institute of Science and Technology (EARIST)
Pamantasan ng Lungsod ng Marikina, J.P. Rizal Campus
Pamantasan ng Lungsod ng Marikina, H. Bautista Campus
Pamantasan ng Lungsod ng Maynila Main Campus
Pamantasan ng Lungsod ng Maynila, District Colleges
Pamantasan ng Lungsod ng Muntinlupa
Pamantasan ng Lungsod ng Pasig
Pamantasan ng Lungsod ng Valenzuela
Parañaque City College of Science and Technology
Pateros Technological College
Philippine Normal University
Philippine Public Safety College
Polytechnic University of the Philippines Manila (Main Campus)
Polytechnic University of the Philippines Parañaque
Polytechnic University of the Philippines Quezon City
Polytechnic University of the Philippines San Juan
Polytechnic University of the Philippines Taguig
Marikina Polytechnic College
National Defense College of the Philippines
Navotas Polytechnic College
Quezon City University
Quezon City University - Batasan Hills
Quezon City University - San Bartolome
Quezon City University - San Francisco
Rizal Technological University
Taguig City University
Universidad de Manila (formerly City College of Manila)
University of the Philippines
University of the Philippines Diliman
University of the Philippines Manila
University of Caloocan City (formerly Caloocan City Polytechnic College)
University of Makati
Valenzuela City Polytechnic College
Private universities and colleges
A
ABE International Business College Caloocan
ABE International Business College Cubao
ABE International Business College Fairview
ABE International Business College Las Piñas
ABE International Business College Makati
ABE International Business College Manila
ABE International Business College Taft
Access Computer College Manila
Access Computer College Cubao
Access Computer College Camarin
Access Computer College Lagro
Access Computer College Marikina
Access Computer College Monumento
Access Computer College Pasig
Access Computer College Zabarte
Adamson University
Air Link International Aviation College
AMA Computer Learning Center Commonwealth
AMA Computer University
UST Angelicum College
Arellano University
Arellano University, Malabon
Arellano University, Mandaluyong
Arellano University, Pasay
Arellano University, Pasig
Asia Pacific College
Asia School of Arts and Sciences
Asian College Quezon City
Asian Institute of Computer Studies Bicutan
Asian Institute of Computer Studies Caloocan
Asian Institute of Computer Studies Commonwealth
Asian Institute for Distance Education
Asian Institute of Journalism and Communication
Asian Institute of Management
Asian Institute of Maritime Studies
Asian School of Hospitality Arts
Asian Social Institute
Asian Summit College
Assumption College San Lorenzo
Ateneo de Manila University
B
Bestlink College of the Philippines
Bernardo College
C
CAP College Foundation
Central Colleges of the Philippines
Centro Escolar University
Centro Escolar University Makati
Chiang Kai Shek College
Chinese General Hospital Colleges
CIIT College of Arts and Technology
Colegio De Muntinlupa
Colegio de San Juan de Letran
Colegio de San Lorenzo
Colegio de Santa Teresa de Avila – Novaliches, Quezon City
College of Divine Wisdom
College of the Holy Spirit Manila
Concordia College (Manila)
College of St.Catherine Quezon City
College of Arts & Sciences of Asia & the Pacific
D
Datamex Institute of Computer Technology
Dee Hwa Liong College Foundation
De La Salle Araneta University
De La Salle–College of Saint Benilde
De La Salle University
De Ocampo Memorial College – Nagtahan, Santa Mesa, Manila
Don Bosco Technical College
DLS–STI College
Dr. Filemon C. Aguilar Memorial College of Las Piñas
Dr. Carlos S. Lanting College (Quezon City)
Dominican College (San Juan)
E
Emmanuel John Institute of Science & Technology (EJIST)
Emilio Aguinaldo College
Enderun Colleges
Entrepreneurs School of Asia (formerly Thames International School)
F
Far Eastern University
Far Eastern University – Makati
Far Eastern University Alabang (FEU Alabang)
Far Eastern University Diliman (FEU Diliman)
Far Eastern University – Nicanor Reyes Medical Foundation (FEU NRMF)
FEU Roosevelt Marikina (FEU Roosevelt)
Far Eastern University – Institute of Technology (FEU Tech)
FEATI University
G
Gateways Institute of Science and Technology (3 campuses in MM)
Global City Innovative College, Bonifacio Global City
Grace Christian College
Greenville College
Greatways Technical Institute Makati
Global Reciprocal Colleges Caloocan
Governor Andres Pascual College
Guzman College of Science and Technology in Quiapo, Manila
I
Immaculada Concepcion College
Immaculate Heart of Mary College-Parañaque
Imus Computer College (ICC) (2 campuses in MM)
Informatics International College (multiple campuses)
iAcademy – Makati
Infotech Institute of Arts and Sciences
Integrated Innovations and Hospitality College, Inc. (IIH College) – Novaliches, Zabarte
Interface Computer College (ICC)
ICC Manila
ICC Caloocan
International Academy of Management and Economics
International Baptist College (I.B.C.)
International Electronics and Technical Institute Inc. (I.E.T.I.) (multiple campuses)
J
Jose Rizal University
Jesus Reigns Christian College
K
Kalayaan College
L
La Concordia College
La Consolacion College Manila
La Verdad Christian College
Lacson College Pasay
Lyceum of Alabang (not affiliated with Lyceum of the Philippines University)
Lyceum of the Philippines University
M
Mandaluyong College
Manila Adventist College
Manila Business College
Manila Central University
Manila Christian Computer Institute for the Deaf
The Manila Times College
Manila Tytana Colleges (formerly Manila Doctors College)
Manuel L. Quezon University
Mapúa University
Mapúa University - Makati
Mary Chiles College of Nursing
Mary Johnston College of Nursing
MFI Technological Institute
Meridian International Business, Arts & Technology College (MINT College)
Medici di Makati College
Metro Manila College, U-Site Jordan Plaines, Kaligayahan, Novaliches, Quezon City
Metropolitan Hospital College of Nursing
Metropolitan Medical Center College of Arts, Science & Technology
Miriam College
N
NAMEI Polytechnic Institute
National Christian Life College
National College of Business and Arts
NCBA - Cubao, Quezon City
NCBA - Fairview, Quezon City
NCBA - Taytay, Rizal
National Teachers College Manila
National University (Philippines)
NU - MOA
New Era University, Quezon City
O
Olivarez College
Olivarez College - Tagaytay
Our Lady of Fatima University
OLFU-Quezon City
Our Lady of Guadalupe Colleges, Mandaluyong
Our Lady of Perpetual Succor College
P
Pacific InterContinental College (PIC), Inc.
Pasig Catholic College
PATTS College of Aeronautics
Perpetual Help College of Manila
Philippine Christian University
Philippine College of Criminology
Philippine College of Health Sciences
Philippine Cultural College
Philippine Merchant Marine Academy
Philippine Merchant Marine School
Philippine Rehabilitation Institute
Philippine School of Business Administration Manila
Philippine State College of Aeronautics
Philippine Women's University
Philsin College Foundation
PMI Colleges (formerly Philippine Maritime Institute)
PMI Colleges Manila
PMI Colleges Quezon City
S
St. Clare College of Caloocan
St. Dominic Savio College
Saint Francis of Assisi College
SFAC Las Piñas
SFAC Taguig
SFAC Muntinlupa
St. James College of Quezon City
Saint Joseph's College of Quezon City
Saint Jude College
St. Louis College Valenzuela
Saint Mary's College of Quezon City
St. Paul University System
St. Paul University Manila
St. Paul University Quezon City
St. Paul College of Makati
St. Paul College of Parañaque
St. Paul College Pasig
Saint Pedro Poveda College
Saint Rita College (Manila)
St. Rita College Parañaque
St. Scholastica's College Manila
Saint Theresa's College of Quezon City
San Beda University
San Beda University(Mendiola, Manila)
San Beda College-Alabang(Alabang Hills Village, Muntinlupa)
San Juan de Dios Educational Foundation – Pasay
San Sebastian College – Recoletos de Manila
Santa Catalina College
Santa Isabel College Manila
Southeastern College
Siena College of Quezon City
Southville Foreign University
Southville International School and Colleges
South Mansfield College - Muntinlupa (Supervised by Southville International School and Colleges)
South SEED LPDH College
STI College
STI College - Alabang
STI College - Caloocan
STI College - Cubao
STI College - Fairview
STI College - Global City
STI College - Las Pinas
STI College - Makati
STI College - Marikina
STI College - Muñoz
STI College - Novaliches
STI College - Parañaque
STI College - Pasay
STI College - Quezon Avenue
STI College - Recto
STI College - Shaw
STI College - Taft Avenue
T
Technological Institute of the Philippines
Technological Institute of the Philippines - Manila
Technological Institute of the Philippines - Quezon City
The One School
Technological University of the Philippines
Technological University of the Philippines Manila (Main Campus)
Technological University of the Philippines-Taguig Campus
Thames Business School
Trace College Makati
Treston International College Taguig
Trinity University of Asia (formerly Trinity College of Quezon City)
U
Unciano Paramedical Colleges Inc. – Guadalcanal, Santa Mesa, Manila
Universal College
University of Asia and the Pacific, Ortigas Center, Pasig
University of Manila
University of Perpetual Help System DALTA
University of Santo Tomas
University of the East
University of the East Caloocan
University of the East Ramon Magsaysay Memorial Medical Center
W
Wesleyan College of Manila
West Bay College
World Citi Colleges
WCC Caloocan
WCC Quezon City
See also
List of universities and colleges in the Philippines (located in provinces in Luzon, Visayas, and Mindanao regions)
Higher education in the Philippines
References
External links
List of universities in Metro Manila
List of universities in Manila (city)
Metro Manila
Universities
Manila |
7932845 | https://en.wikipedia.org/wiki/Software%20blueprint | Software blueprint | A software blueprint is the final product of a software blueprinting process. Its name derives from an analogy with the term blueprint as used within the traditional construction industry. Therefore, a true software blueprint should share a number of key properties with its building-blueprint counterpart. Software blueprinting relies on achieving a clean separation between logically orthogonal aspects of the software. Once that is achieved, it facilitates the localization of related logic and use of an optimal description medium for each of the logically independent components (for each blueprint).
Properties
Software blueprints focus on one application aspect, for clarity of presentation and to ensure that all of the relevant logic is localized. The localization of aspect logic is intended to improve navigability, and this is based on the assumption that the application programmer most commonly wishes to browse application aspects independently.
The single-aspect focus of a software blueprint means that an optimal description medium can be selected. For example, algorithmic code may be best represented using textual code, whereas a graphical user interface may be best represented using a form design. Selection of an intuitive description medium, i.e., one that matches well with mental models and designs for a particular aspect, may improve:
Ease of navigation
Ease of understanding
Fault detection rate
Ability to manage complexity
Ease of development
Examples
GUI form design
The GUI form design (see GUI toolkit) is widely adopted across the software industry and allows the programmer to specify a prescriptive description of the appearance of GUI widgets within a window. This description can be translated directly to the code that draws the GUI (because it is prescriptive).
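As an illustration of this prescriptive-to-code translation (the description format, `Window` class and widget names below are invented for this sketch and do not belong to any particular toolkit), a declarative form description can be mechanically turned into the code that constructs the window:

```python
import json

# A prescriptive (declarative) form description: it states WHAT the window
# contains, not HOW to build it. All names here are hypothetical.
FORM = json.loads("""
{
  "window": {"title": "Login", "width": 320, "height": 120},
  "widgets": [
    {"type": "Label",  "text": "User:", "x": 10, "y": 10},
    {"type": "Entry",  "name": "user",  "x": 60, "y": 10},
    {"type": "Button", "text": "OK",    "x": 10, "y": 60}
  ]
}
""")

def generate_gui_code(form: dict) -> str:
    """Translate the prescriptive description directly into constructor code."""
    win = form["window"]
    lines = [f'win = Window("{win["title"]}", {win["width"]}, {win["height"]})']
    for w in form["widgets"]:
        # Every key except "type" becomes a keyword argument of the widget.
        args = ", ".join(f"{k}={v!r}" for k, v in w.items() if k != "type")
        lines.append(f'win.add({w["type"]}({args}))')
    return "\n".join(lines)

print(generate_gui_code(FORM))
```

Because the description is prescriptive, the translation requires no interpretation: each entry maps one-to-one onto a constructor call.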
Machine translatable co-ordination languages (e.g. CDL)
Languages such as the Concurrent Description Language (CDL) extract an application's macroscopic logic (communication, synchronization and arbitration) from complex multi-threaded and/or multi-process applications into a single contiguous visual representation. The prescriptive nature of this description means that it can be machine-translated into an executable framework that may be tested for structural integrity (detection of race conditions, deadlocks, etc.) before the microscopic logic is available.
Class designers
Class designers allow the specification of arbitrarily complex data structures in a convenient form and the prescriptive nature of this description allows generation of executable code to perform list management, format translation, endian swapping and so on.
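For instance (the record layout and field names below are invented for this sketch), a declarative record description is enough to generate both packing code and an endian-swapping routine, here using Python's standard `struct` module:

```python
import struct

# Declarative record description (illustrative): field name -> struct type code
RECORD = [("id", "I"), ("temp", "h"), ("flags", "B")]

# Both byte layouts are generated from the single description.
fmt_le = "<" + "".join(code for _, code in RECORD)  # little-endian layout
fmt_be = ">" + "".join(code for _, code in RECORD)  # big-endian layout

def swap_endian(blob: bytes) -> bytes:
    """Re-encode a little-endian record as big-endian, derived from RECORD."""
    return struct.pack(fmt_be, *struct.unpack(fmt_le, blob))

blob = struct.pack(fmt_le, 7, -40, 3)            # pack a sample record
print(struct.unpack(fmt_be, swap_endian(blob)))  # (7, -40, 3)
```

The generated format strings play the role of the executable code a class designer would emit: format translation and endian swapping follow directly from the data-structure description.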
Software designers
Classes are used as building blocks by software designers to model more complex structures. In software architecture the Unified Modeling Language (UML) is an industry standard used for modeling the blueprint of software. UML represents structure, associations and interactions between various software elements, like classes, objects or components. It helps the software designer to design, analyze and communicate ideas to other members of the software community.
See also
Software design
Software architecture |
52193232 | https://en.wikipedia.org/wiki/Play%3A5 | Play:5 | The Play:5 (branded as the PLAY:5; formerly the ZonePlayer S5) is a smart speaker developed by Sonos, announced on October 13, 2009 and released on November 5, 2009, as the debut product in the Play line. It is one of the speakers designed to initiate SonosNet; it can form a stereo pair with another Play:5 and can additionally pair with the Playbar and SUB to form a basic home theater system.
History
The ZonePlayer S5, as the Play:5 was originally branded, was announced on October 13, 2009 and released on November 5. In May 2010, Sonos released a software update that allowed the speaker to form a stereo pair with another ZonePlayer S5 and added improvements such as crossfading and an alarm. In June, a black color option for the speaker was made available and the device went on sale in Singapore. In August 2011, Sonos started referring to the S5 as the Play:5.
In July 2012, Sonos Studio announced the "Light House", an interactive structure that was paired with the Play:3 and SUB speakers and was hosted by The Crystal Method. In September 2014, another software update gave the Play:5, along with the Play:3, Play:1 and Playbar, the ability to run SonosNet over Wi-Fi without connecting a Bridge, Boost or the speaker itself to a router by Ethernet, following a public beta test of the feature. In September 2015, Sonos announced the redesigned, second-generation Play:5 alongside the tuning software, Trueplay.
Features
The speaker can run SonosNet, a peer-to-peer mesh network that allows the user to play media on one, some or all speakers connected to the network, either independently over Wi-Fi or with itself or a Boost connected to a router via Ethernet. It can also establish a stereo pair with another Play:5 to create separate left and right audio channels, and can additionally pair with a Playbar or SUB to create a home theater system.
Design
Hardware
The Play:5 is the current luxury model of the Play line. The first generation has two tweeters and two mid-range drivers plus a subwoofer, powered by five Class-D amplifiers; the second generation has three tweeters and three mid-woofers, powered by six Class-D amplifiers, arranged in a phased array. Like its sibling products, it has an Ethernet port. It is the only Play speaker with a headphone jack, two microphones and an AUX port, and is the second in the lineup, alongside the Play:3, to house an LED indicator.
Software
The speaker is set up and controlled solely through the Sonos Controller app; third- and first-party apps are otherwise incompatible with it. However, music services such as Spotify, Apple Music, Tidal, Pandora and others can be connected to the app and streamed through it.
Reception
The Play:5 has been positively received by critics. The first generation was praised for its ease of access, sound and design, but criticized for its limited functionality and the software available at the time. CNET praised the first generation, stating that its sound "puts it somewhere between the Bose SoundDock II and the larger SoundDock 10". Will Smith of Tested criticized its price and was puzzled by its inclusion of a headphone jack, but stated that "The S5 is a great addition to the Sonos product line. However, if you haven't already bought into the Sonos vision of total-home music, it may not be the best fit." What Hi-Fi? concluded that "Although this partnership results in great sound, some would see it as excessively pricey and large for a bedroom, kitchen or study." "Pricing is our only concern with the Sonos Zoneplayer S5 and its family of products" was the verdict from GoodGearGuide. In a less positive review, PC Magazine said "simply put, this is the cheapest way for current iPod touch and iPhone owners to enter the seamless world of home audio streaming that Sonos provides."
The second generation received further praise. Critics lauded its improved audio quality, with CNET noting that "this speaker appears to be about 20 to 25 percent bigger than the previous Play:5 and it delivers significantly more bass" and Engadget concluding that "the Play:5 is the best speaker Sonos has ever made." PCMag gave the second generation an Editors' Choice award, writing that "from a multi-room home audio standpoint, Sonos still leads the pack." Tom's Guide derided the speaker for lacking Bluetooth support and for its exclusivity to Sonos' software, but favored its design and quality and praised its stereo pairing. The Guardian criticized the stereo pair for "almost [being] too loud at times, particularly if you're looking for background music at night for reading" but nevertheless gave the Play:5 a positive review.
References
Smart speakers
Sonos |
27093938 | https://en.wikipedia.org/wiki/5476%20Mulius | 5476 Mulius | 5476 Mulius, provisional designation 1989 TO11, is a mid-sized Jupiter trojan from the Trojan camp, approximately 35 kilometers in diameter. It was discovered on 2 October 1989, by American astronomer Schelte Bus at the Cerro Tololo Inter-American Observatory in Chile. The dark Jovian asteroid has a rotation period of 5.8 hours. It was numbered in March 1993. In November 2021, it was named after the Trojan warrior Mulius, who was killed by Achilles during the Trojan War, according to Greek mythology.
Classification and orbit
Mulius is a Jupiter trojan which stays in a 1:1 orbital resonance with Jupiter. It is located in the trailing Trojan camp at the Gas Giant's Lagrangian point, 60° behind its orbit. It is also a non-family asteroid of the Jovian background population.
It orbits the Sun at a distance of 4.7–5.5 AU once every 11 years and 7 months (4,217 days; semi-major axis of 5.11 AU). Its orbit has an eccentricity of 0.07 and an inclination of 14° with respect to the ecliptic. The asteroid was first observed at Palomar Observatory in August 1952. One year later, the body's observation arc begins at Palomar in August 1953, or more than 36 years prior to its official discovery observation at Cerro Tololo.
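The quoted period follows from Kepler's third law: for a body orbiting the Sun, the period in years equals the semi-major axis in astronomical units raised to the power 3/2. A quick check against the figures above (small differences come from rounding the semi-major axis):

```python
# Verify the orbital period of 5476 Mulius from its semi-major axis.
a = 5.11                 # semi-major axis in AU, from the orbit data above
period_years = a ** 1.5  # Kepler's third law in solar units: P^2 = a^3
period_days = period_years * 365.25

print(round(period_years, 2), round(period_days))  # ~11.55 years, ~4219 days
```

This agrees with the 11 years and 7 months (4,217 days) stated in the article to within rounding.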
Numbering and naming
This minor planet was numbered by the Minor Planet Center on 8 March 1993. On 29 November 2021, the IAU's Working Group for Small Body Nomenclature named it from Greek mythology after the Trojan warrior Mulius, who was killed during the Trojan War by Achilles, who drove his javelin in one ear and out the other.
Physical characteristics
Mulius is an assumed C-type asteroid. Jovian asteroids are typically D-types, with the remainder being mostly carbonaceous C- and primitive P-type asteroids.
Rotation period
Four nights of photometric observations of this asteroid were used to build a lightcurve showing a rotation period of 5.8 hours with a brightness variation of 0.30 magnitude. The well-defined lightcurve was obtained during February 1994 by Stefano Mottola and Anders Erikson using the ESO 1-metre telescope at the La Silla Observatory in Chile.
Diameter and albedo
According to the survey carried out by the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, the asteroid measures 35.1 kilometers in diameter and its surface has an albedo of 0.099, while the Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a larger diameter of 42.2 kilometers.
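The two estimates are related through the standard conversion between an asteroid's diameter D (in km), geometric albedo p and absolute magnitude H: D = (1329 / sqrt(p)) · 10^(−H/5). The H values below are back-computed from the published figures purely for illustration; they are not stated in the article:

```python
import math

def implied_abs_magnitude(diameter_km: float, albedo: float) -> float:
    """Invert D = 1329 / sqrt(p) * 10**(-H/5) to recover H."""
    return 5 * math.log10(1329 / (diameter_km * math.sqrt(albedo)))

# NEOWISE: 35.1 km at albedo 0.099; CALL: 42.2 km at assumed albedo 0.057
h_neowise = implied_abs_magnitude(35.1, 0.099)
h_call = implied_abs_magnitude(42.2, 0.057)
print(round(h_neowise, 2), round(h_call, 2))  # ~10.4 and ~10.6
```

The calculation shows why a lower assumed albedo yields a larger diameter for the same brightness, and that the two surveys' figures correspond to slightly different absolute magnitudes.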
References
External links
Asteroid Lightcurve Database (LCDB), query form (info )
Discovery Circumstances: Numbered Minor Planets (5001)-(10000) – Minor Planet Center
Asteroid (5476) 1989 TO11 at the Small Bodies Data Ferret
005476
Discoveries by Schelte J. Bus
Minor planets named from Greek mythology
Named minor planets
19891002 |
61173154 | https://en.wikipedia.org/wiki/PipeWire | PipeWire | PipeWire is a server for handling audio and video streams and hardware on Linux. It was created by Wim Taymans at Red Hat. It handles multimedia routing and pipeline processing.
History
In 2015, Taymans started work on PipeWire. It was based on ideas from a couple of projects, including one called PulseVideo by William Manley.
Christian Schaller stated "it draws many of its ideas from an early prototype by William Manley called PulseVideo and builds upon some of the code that was merged into GStreamer due to that effort."
A goal was to improve handling of video on Linux the same way PulseAudio improved handling of audio.
Although it was a separate project, Taymans initially considered using the name PulseVideo for it. By June 2015, the name Pinos was being used, after Pinos de Alhaurin in Spain, a city where Taymans had lived.
Initially, PipeWire only handled video streams. By early 2017, Taymans had started working on integrating audio streams. He wanted the project to support both consumer and professional audio use cases. For advice on the professional audio implementation he consulted Paul Davis and Robin Gareus. At this time the name PipeWire was adopted for the project.
In November 2018, PipeWire was re-licensed from the LGPL to the MIT License.
In April 2021, Fedora became the first Linux distribution to ship PipeWire for audio by default through its release 34.
Features
The project's aims include:
To work with sandboxed Flatpak applications.
To provide secure methods for screenshotting and screencasting on Wayland compositors.
To unify handling of cases handled by JACK and PulseAudio.
Reception
PipeWire has received much praise, especially among the GNOME and Arch Linux communities. In particular, it fixes many problems PulseAudio had experienced, including high CPU usage, Bluetooth connection issues, and JACK backend issues.
References
External links
Presentation of Pinos by Wim Taymans
The PipeWire multimedia framework and its potential in AGL (PDF)
PulseVideo
PipeWire: A Low-Level multimedia subsystem (PDF)
PipeWire Under The Hood
Audio software for Linux
Video software for Linux |
56656614 | https://en.wikipedia.org/wiki/Abdullah%20Alswaha | Abdullah Alswaha | Abdullah bin Amer Alswaha (Arabic: عبدالله بن عامر السواحه) is the current Minister of Communications and Information Technology in Saudi Arabia. His ministry is responsible for developing Saudi Arabia's ICT infrastructure and workforce as part of the country's Vision 2030 programme. He was previously managing director of Cisco Saudi Arabia.
Education
Alswaha holds bachelor's degrees in computer science and electrical engineering. He studied as an undergraduate in Seattle and at the King Fahd University of Petroleum and Minerals. He has since completed executive education programs at Harvard Business School.
Public Service
On May 3, 2021, the Custodian of the Two Holy Mosques King Salman bin Abdulaziz issued a Royal Decree appointing Alswaha to the post of chairman of the Saudi Space Commission.
In March of the same year, he was appointed as chairman of the board of directors for the King Abdulaziz City for Science and Technology.
He is also Minister of Communications and Information Technology, and Chairman of the Communications and Information Technology Commission (CITC), posts he has held since April of 2017.
Private Sector Career
Alswaha began his career in 2005 at Cisco Saudi Arabia, becoming director of operations and overseeing the development of the company's systems engineering, public sector, commercial, service provider and channel operations. In May 2016, he was appointed managing director.
High Profile Participations
At the Future Investment Initiative conference in Riyadh in October 2017, Alswaha announced a partnership between his ministry, the philanthropic organisation MiSK, and the Mohammed bin Salman College for Business and Entrepreneurship to put young local entrepreneurs and start-ups through a one-year training programme with big tech partners from the US and Europe. The same month he announced that Saudi Post, the government-operated postal system, would enter a five-year ‘corporatization phase’ before full privatization.
He has previously affirmed Saudi Arabia's commitment to business-friendly policy and has called for greater empowerment of women in the kingdom as part of the push to modernize its economy and workforce.
At the World Economic Forum in Davos in January 2018 Alswaha outlined Saudi Arabia's digital vision, announcing spending of $3 billion to roll out a nationwide 5G network and connect two million homes with high speed fiber optic.
At the Mobile World Congress in Barcelona, Alswaha announced that, in the framework of Saudi Vision 2030, Saudi Arabia plans to become a leading digital market in the Middle East and Africa region. Moreover, with an annual budget of $22 billion for the digital and IT sector, Saudi Arabia has become one of the top-ranked digital markets worldwide.
Primary Responsibilities
Cabinet Minister of the Kingdom of Saudi Arabia
Ministry of Communications and Information Technology
Saudi Space Commission, Chairman
King Abdulaziz City for Science and Technology, Chairman
Additional Responsibilities and Affiliations
Communications and Information Technology Commission (CITC), Chairman
Saudi Post, Chairman
National Digitization Committee, Executive Chairman
National Committee for Digital Transformation, Chairman
Board Member, Misk Foundation
Board Member, Neom
Board Member, Council of Economic and Development Affairs
Board Member, Economic Cities Authority
Board Member, World Economic Forum Centre for the Fourth Industrial Revolution Global Network Advisory
Previous positions
Abdullah Alswaha formerly held other positions, including member of the Technology Advisory Board at Prince Mohammed Bin Fahd University (PMU). He was also the founder of the Social Entrepreneurship Incubator, which supports entrepreneurs to develop solutions in the fields of healthcare and education.
References
Government ministers of Saudi Arabia
Communications in Saudi Arabia
Year of birth missing (living people)
Living people |
40383082 | https://en.wikipedia.org/wiki/SAP%20HANA | SAP HANA | SAP HANA (high-performance analytic appliance) is an in-memory, column-oriented, relational database management system developed and marketed by SAP SE. Its primary function as the software running a database server is to store and retrieve data as requested by the applications. In addition, it performs advanced analytics (predictive analytics, spatial data processing, text analytics, text search, streaming analytics, graph data processing) and includes extract, transform, load (ETL) capabilities as well as an application server.
History
During the early development of SAP HANA, a number of technologies were developed or acquired by SAP SE. These included the TREX search engine (an in-memory, column-oriented search engine), P*TIME (an in-memory online transaction processing (OLTP) platform acquired by SAP in 2005), and MaxDB with its in-memory liveCache engine.
The first major demonstration of the platform was in 2008: teams from SAP SE, the Hasso Plattner Institute and Stanford University demonstrated an application architecture for real-time analytics and aggregation called HYRISE. Former SAP SE executive, Vishal Sikka, mentioned this architecture as "Hasso's New Architecture". Before the name "HANA" stabilized, people referred to this product as "New Database". The software was previously called "SAP High-Performance Analytic Appliance".
The first research paper on HYRISE was published in November 2010. The research engine was released as open source in 2013 and was reengineered in 2016, becoming HYRISE2 in 2017.
The first product shipped in late November 2010.
By mid-2011, the technology had attracted interest but more experienced business customers considered it to be "in early days".
HANA support for SAP NetWeaver Business Warehouse was announced in September 2011 for availability by November.
In 2012, SAP promoted aspects of cloud computing. In October 2012, SAP announced a platform as a service offering called the SAP HANA Cloud Platform and a variant called SAP HANA One that used a smaller amount of memory.
In May 2013, a managed private cloud offering called the HANA Enterprise Cloud service was announced.
In May 2013, Business Suite on HANA became available, enabling customers to run SAP Enterprise Resource Planning functions on the HANA platform.
S/4HANA, released in 2015, written specifically for the HANA platform, combines functionality for ERP, CRM, SRM and others into a single HANA system.
S/4HANA is intended to be a simplified business suite, replacing earlier generation ERP systems. While it is likely that SAP will focus its innovations on S/4HANA, some customers using non-HANA systems have raised concerns of being locked into SAP products. Since S/4HANA requires an SAP HANA system to run, customers running SAP business suite applications on hardware not certified by SAP would need to migrate to a SAP-certified HANA database should they choose the features offered by S/4HANA.
Rather than versioning, the software utilizes service packs, referred to as Support Package Stacks (SPS), for updates. Support Package Stacks are released every 6 months.
In November 2016 SAP announced SAP HANA 2, which offers enhancements to multiple areas such as database management and application management and includes two new cloud services: Text Analysis and Earth Observation Analysis.
HANA customers can upgrade to HANA 2 from SPS10 and above. Customers running SPS9 and below must first upgrade to SPS12 before upgrading to HANA 2 SPS01.
Architecture
Overview
The key distinctions between HANA and previous generation SAP systems are that it is a column-oriented, in-memory database that combines OLAP and OLTP operations into a single system; thus in general SAP HANA is an OLTAP system. Storing data in main memory rather than on disk provides faster data access and, by extension, faster querying and processing. While storing data in-memory confers performance advantages, it is a more costly form of data storage. Observed data access patterns show that up to 85% of data in an enterprise system may be infrequently accessed, so it can be cost-effective to store frequently accessed, or "hot", data in-memory while the less frequently accessed "warm" data is stored on disk, an approach SAP has termed "dynamic tiering".
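The hot/warm split can be sketched with a toy two-tier store that keeps access counts and demotes the least-accessed keys out of the fast tier. The names (`TieredStore`, `hot_capacity`) and the promotion/demotion policy are illustrative assumptions, not SAP's dynamic-tiering implementation:

```python
class TieredStore:
    """Toy two-tier store: a capacity-limited in-memory 'hot' dict and a
    'warm' dict standing in for disk. Least-accessed keys are demoted."""

    def __init__(self, hot_capacity):
        self.hot_capacity = hot_capacity
        self.hot, self.warm = {}, {}
        self.hits = {}  # access counts drive promotion/demotion

    def put(self, key, value):
        self.hot[key] = value
        self.hits.setdefault(key, 0)
        self._rebalance()

    def get(self, key):
        self.hits[key] = self.hits.get(key, 0) + 1
        if key in self.warm:                  # promote on access
            self.hot[key] = self.warm.pop(key)
        value = self.hot[key]
        self._rebalance()
        return value

    def _rebalance(self):
        # Demote the least-accessed keys once the hot tier is over capacity.
        while len(self.hot) > self.hot_capacity:
            coldest = min(self.hot, key=lambda k: self.hits[k])
            self.warm[coldest] = self.hot.pop(coldest)

store = TieredStore(hot_capacity=2)
for key in ("a", "b", "c"):
    store.put(key, key.upper())
store.get("a"); store.get("a"); store.get("b")
print(sorted(store.hot), sorted(store.warm))  # ['a', 'b'] ['c']
```

After the accesses above, the rarely touched key has been pushed to the warm tier while the frequently read keys stay in memory.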
Column-oriented systems store all data for a single column in the same location, rather than storing all data for a single row in the same location (row-oriented systems). This can enable performance improvements for OLAP queries on large datasets and allows greater vertical compression of similar types of data in a single column. If the read times for column-stored data are fast enough, consolidated views of the data can be computed on the fly, removing the need to maintain aggregate views and their associated data redundancy.
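The contiguous-column layout and its compression benefit can be illustrated in a few lines; this is a conceptual sketch, not HANA's actual storage format:

```python
from itertools import groupby

# Row-oriented: each record stored together.
rows = [
    ("Alice", "DE", 100),
    ("Bob",   "DE", 150),
    ("Carol", "US", 200),
    ("Dave",  "US", 250),
]

# Column-oriented: all values of one attribute stored contiguously.
columns = {
    "name":    [r[0] for r in rows],
    "country": [r[1] for r in rows],
    "amount":  [r[2] for r in rows],
}

def run_length_encode(values):
    """Compress a column of repetitive adjacent values as (value, count) pairs."""
    return [(v, len(list(g))) for v, g in groupby(values)]

# Similar values sitting next to each other compress well.
print(run_length_encode(columns["country"]))  # [('DE', 2), ('US', 2)]

# An OLAP-style aggregate touches only the one column it needs.
print(sum(columns["amount"]))                 # 700
```

The aggregate at the end reads a single contiguous list rather than skipping through whole rows, which is the access pattern behind the OLAP speedups described above.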
Although row-oriented systems have traditionally been favored for OLTP, in-memory storage opens techniques to develop hybrid systems suitable for both OLAP and OLTP capabilities, removing the need to maintain separate systems for OLTP and OLAP operations.
The index server performs session management, authorization, transaction management and command processing. The database has both a row store and a columnar store. Users can create tables using either store, but the columnar store has more capabilities and is most frequently used. The index server also manages persistence between cached memory images of database objects, log files and permanent storage files. The XS engine allows web applications to be built.
SAP HANA Information Modeling (also known as SAP HANA Data Modeling) is a part of HANA application development. Modeling is the methodology to expose operational data to the end user. Reusable virtual objects (named calculation views) are used in the modelling process.
MVCC
SAP HANA manages concurrency through the use of multiversion concurrency control (MVCC), which gives every transaction a snapshot of the database at a point in time. When an MVCC database needs to update an item of data, it will not overwrite the old data with new data, but will instead mark the old data as obsolete and add the newer version.
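A minimal sketch of the idea, with versioned records and snapshot-scoped reads; this is illustrative only, not HANA's actual MVCC machinery:

```python
import itertools

class MVCCStore:
    """Toy multiversion store: writes append new versions instead of
    overwriting, and a reader sees only versions committed at or before
    its snapshot timestamp."""

    def __init__(self):
        self._clock = itertools.count(1)
        self._versions = {}  # key -> list of (commit_ts, value)

    def write(self, key, value):
        ts = next(self._clock)  # commit timestamp; old versions stay in place
        self._versions.setdefault(key, []).append((ts, value))
        return ts

    def snapshot(self):
        """Timestamp fixing which versions a transaction may see."""
        return next(self._clock)

    def read(self, key, snapshot_ts):
        # Newest version committed no later than the snapshot.
        visible = [v for ts, v in self._versions.get(key, [])
                   if ts <= snapshot_ts]
        return visible[-1] if visible else None

store = MVCCStore()
store.write("x", "old")
snap = store.snapshot()                    # reader starts here
store.write("x", "new")                    # later write marks "old" obsolete
print(store.read("x", snap))               # old
print(store.read("x", store.snapshot()))   # new
```

Note that the write after the snapshot does not disturb the earlier reader: it keeps seeing "old", while a fresh snapshot sees "new".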
Big data
In a scale-out environment, HANA can keep volumes of up to a petabyte of data in memory while returning query results in under a second. However, RAM is still much more expensive than disk space, so the scale-out approach is only feasible for certain time critical use cases.
Analytics
SAP HANA includes a number of analytic engines for various kinds of data processing. The Business Function Library includes a number of algorithms made available to address common business data processing algorithms such as asset depreciation, rolling forecast and moving average.
The Predictive Analytics Library includes native algorithms for calculating common statistical measures in areas such as clustering, classification and time series analysis.
HANA incorporates the open source statistical programming language R as a supported language within stored procedures.
The column-store database offers graph database capabilities. The graph engine processes the Cypher Query Language and also offers visual graph manipulation via a tool called Graph Viewer. Graph data structures are stored directly in relational tables in HANA's column store. Pre-built algorithms in the graph engine include pattern matching, neighborhood search, single shortest path, and strongly connected components. Typical use cases for the graph engine include supply chain traceability, fraud detection, and logistics and route planning.
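The single-shortest-path primitive amounts to a traversal over edges stored as plain (source, target) pairs, much as a graph engine keeps them in relational tables. A hedged sketch using breadth-first search (the supply-chain example and function name are illustrative, not HANA's built-in algorithm):

```python
from collections import deque

def shortest_path(edges, start, goal):
    """Breadth-first single shortest path over an unweighted directed graph,
    returning the list of nodes on the path, or None if goal is unreachable."""
    adjacency = {}
    for src, dst in edges:
        adjacency.setdefault(src, []).append(dst)
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacency.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Edges as plain pairs, i.e. rows of a two-column relational table.
supply_chain = [("plant", "warehouse"), ("warehouse", "store"),
                ("plant", "port"), ("port", "store")]
print(shortest_path(supply_chain, "plant", "store"))
# ['plant', 'warehouse', 'store']
```

Because BFS explores paths in order of length, the first path reaching the goal is guaranteed to be a shortest one.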
HANA also includes a spatial database engine which implements spatial data types and SQL extensions for CRUD operations on spatial data. HANA is certified by the Open Geospatial Consortium, and it integrates with ESRI's ArcGIS geographic information system.
In addition to numerical and statistical algorithms, HANA can perform text analytics and enterprise text search. HANA's search capability is based on “fuzzy” fault-tolerant search, much like modern web-based search engines. Results include a statistical measure for how relevant search results are, and search criteria can include a threshold of accuracy for results. Analyses available include identifying entities such as people, dates, places, organizations, requests, problems, and more. Such entity extraction can be catered to specific use cases such as Voice of the Customer (customer's preferences and expectations), Enterprise (i.e. mergers and acquisitions, products, organizations), and Public Sector (public persons, events, organizations). Custom extraction and dictionaries can also be implemented.
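Fault-tolerant ("fuzzy") matching with a relevance score and an accuracy threshold can be sketched with the standard Levenshtein edit distance; the scoring formula and function names below are an illustration of the concept, not HANA's search implementation:

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def fuzzy_score(query, candidate):
    """Relevance in [0, 1]; 1.0 is an exact match."""
    longest = max(len(query), len(candidate)) or 1
    return 1.0 - edit_distance(query, candidate) / longest

def fuzzy_search(query, candidates, threshold=0.8):
    """Keep candidates whose score meets the accuracy threshold,
    best matches first, each paired with its relevance score."""
    scored = [(fuzzy_score(query, c), c) for c in candidates]
    return sorted([sc for sc in scored if sc[0] >= threshold], reverse=True)

matches = fuzzy_search("okinawa", ["okinava", "osaka", "okinawa"])
print([name for score, name in matches])  # ['okinawa', 'okinava']
```

The threshold parameter plays the role of the accuracy criterion described above: raising it discards looser matches, while each surviving result still carries its statistical relevance score.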
Application development
Besides the database and data analytics capabilities, SAP HANA is a web-based application server, hosting user-facing applications tightly integrated with the database and analytics engines of HANA. The "XS Advanced Engine" (XSA) natively works with Node.js and JavaEE languages and runtimes. XSA is based on Cloud Foundry architecture and thus supports the notion of “Bring Your Own Language”, allowing developers to develop and deploy applications written in languages and in runtimes other than those XSA implements natively, as well as deploying applications as microservices. XSA also allows server-side JavaScript (XSJS).
Supporting the application server is a suite of application lifecycle management tools allowing development, deployment, and monitoring of user-facing applications.
Deployment
HANA can be deployed on-premises or in the cloud from a number of cloud service providers.
HANA can be deployed on-premises as a new appliance from a certified hardware vendor.
Alternatively, existing hardware components such as storage and network can be used as part of the implementation, an approach which SAP calls "Tailored Data Center Integration (TDI)".
HANA is certified to run on multiple operating systems, including SUSE Linux Enterprise Server and Red Hat Enterprise Linux.
Supported hardware platforms for on-premise deployment include Intel 64 and POWER Systems.
The system is designed to support both horizontal and vertical scaling.
Multiple cloud providers offer SAP HANA on an Infrastructure as a Service basis, including:
Amazon Web Services
Microsoft Azure
Google Cloud Platform
IBM Softlayer
Huawei FusionSphere
HP Helion
SAP also offers its own cloud services in the form of:
SAP HANA Enterprise Cloud, a private managed cloud
SAP Business Technology Platform (previously known as SAP Cloud Platform and HANA Cloud Platform), Platform as a service
Editions
SAP HANA licensing is primarily divided into two categories.
Runtime License:
Used to run SAP applications such as SAP Business Warehouse powered by SAP HANA and SAP S/4HANA.
Full Use License:
Used to run both SAP and non-SAP applications. This licensing can be used to create custom applications.
As part of the full use license, features are grouped as editions targeting various use cases.
Base Edition: Provides core database features and development tools but does not support SAP applications.
Platform Edition: Base edition plus spatial, predictive, R server integration, search, text, analytics, graph engines and additional packaged business libraries.
Enterprise Edition: Platform edition plus additional bundled components for some of the data loading capabilities and the rule framework.
In addition, capabilities such as streaming and ETL are licensed as additional options.
As of March 9, 2017, SAP HANA is available in an Express edition; a streamlined version which can run on laptops and other resource-limited environments. The license for SAP HANA, express edition is free of charge, even for productive use up to 32 GB of RAM. Additional capacity increases can be purchased.
See also
Comparison of relational database management systems
Comparison of object-relational database management systems
Database management system
List of relational database management systems
List of column-oriented DBMSes
List of in-memory databases
List of databases using MVCC
References
External links
SAP HANA, Developer edition
When to Use an In-Memory Database
SAP HANA on IBM Power Systems
SAP HANA : In-Memory made in SAP
SAP SE
2010 software
Proprietary database management systems
Big data products
Column-oriented DBMS software for Linux
Proprietary commercial software for Linux |
60658833 | https://en.wikipedia.org/wiki/Thomas%20Thackeray | Thomas Thackeray | Thomas Thackeray DD (1693 – 25 August 1760) was a Church of England clergyman who taught at his old school, Eton College, and ended his career as Head Master of Harrow School.
Life
Thackeray was born in 1693 at Hampsthwaite, in the West Riding of Yorkshire (now in North Yorkshire), a son of Timothy Thackeray, parish clerk of Hampsthwaite and one of a family of yeomen. From 1706 he was educated as a King's Scholar at Eton College. In 1712, he became a scholar of King's College, Cambridge, was elected a fellow of the college in 1715, and then returned to his old school, Eton, as an assistant schoolmaster. In 1728, he gained a benefice as Rector of Little Chishall, Essex, which he held until his death. The same year, he was appointed Rector of Heydon. In 1729, he married Anne Woodward, with whom he went on to have sixteen children.
In 1743, Thackeray narrowly missed being elected as provost of his old college, King's. In 1746, he was chosen as Head Master of Harrow School, and in 1748 as chaplain to Frederick, Prince of Wales. The prince's early death in 1751 dashed some of his hopes of future royal preferment, but he remained on excellent terms with the new heir to the throne, Prince George, and in 1753 Benjamin Hoadly, Bishop of Winchester, gave him the sinecure of archdeacon of Surrey, which was worth another £130 a year.
Thackeray was looked on at Harrow as a wise appointment, as he had many useful social contacts, especially among the nobility. In the 1750s, Prince George's friend (and future prime minister) John Stuart, 3rd Earl of Bute, sent his sons to Harrow.
When he died in 1760, a month after retiring as head master, fourteen of Thackeray's sixteen children were still alive, and he was able to leave each of them £300. His widow lived on until 1797, when she left a house at Eton and an estate worth £10,000.
Descendants
Thackeray's fourth son, Thomas Thackeray (1736–1806), was a surgeon at Cambridge and was the father of William Makepeace Thackeray (1770–1849), a physician at Chester; Elias Thackeray (1771–1854), Vicar of Dundalk, in County Louth; John Richard Thackeray (1772–1846), churchman; and Jane Townley Thackeray (1788–1871), who in 1813 married George Pryme, an economist.
Thackeray's fifth son, Frederick (1737–1782), was a physician at Windsor and was the father of General Frederick Rennell Thackeray; George Thackeray, Provost of King's College, Cambridge; and Jane, who married James Rennell in 1772.
Thackeray's other fourteen children included Anne, John, Alethia, Henrietta, Martha, Theodosia, and Decima.
The youngest of Thomas Thackeray's children, William Makepeace Thackeray (1749–1813), became a clerk of the East India Company and made a fortune in India. He was designated as the first Collector of Sylhet in 1772. He returned to England four years later. His children included Francis Thackeray, author of the Life of Lord Chatham (1827); and Richmond Thackeray, born in 1781, Secretary to the Board of Revenue at Calcutta, who married Anne Becher, a daughter of John Harman Becher, and was the father of the novelist William Makepeace Thackeray (1811–1863), an only child who was born at Calcutta in 1811.
Thackeray is a direct ancestor of the comedian Al Murray, a great-great-great-grandson of the novelist William Makepeace Thackeray.
Notes
External links
Will of Thomas Thackeray, Doctor in Divinity of Harrow , Middlesex at nationalarchives.gov.uk
1693 births
1760 deaths
Doctors of Divinity
Alumni of King's College, Cambridge
Fellows of King's College, Cambridge
Head Masters of Harrow School
People educated at Eton College
People from Nidderdale |
1032305 | https://en.wikipedia.org/wiki/Kadena%20Air%20Base | Kadena Air Base | Kadena Air Base (IATA: DNA, ICAO: RODN) is a United States Air Force base in the towns of Kadena and Chatan and the city of Okinawa, in Okinawa Prefecture, Japan. It is often referred to as the "Keystone of the Pacific". Kadena Air Base is home to the USAF's 18th Wing, the 353rd Special Operations Group, reconnaissance units, 1st Battalion, 1st Air Defense Artillery Regiment, and a variety of associated units. Over 20,000 American servicemembers, family members, and Japanese employees live or work aboard Kadena Air Base. It is the largest and most active U.S. Air Force base in East Asia.
History
Kadena Air Base's history dates back to just before the Battle of Okinawa in April 1945, when a local construction firm completed a small airfield named Yara Hikojo near the village of Kadena. The airfield, used by the Imperial Japanese Army Air Force, was one of the first targets of the Tenth United States Army 7th Infantry Division. The United States seized it from the Japanese during the battle.
World War II
What the Americans captured was a strip of badly damaged coral runway. "The initial work at Kadena was accomplished by the 1901st Aviation Engineer Battalion 7th U.S. Infantry Division and Naval Construction Battalion Maintenance Unit CBMU 624 on 4 April"; by nightfall the same day, the runway could accept emergency landings. Eight days later, after additional coral was added, the airfield was declared operational and put into immediate service by artillery-spotting aircraft. Additional construction was performed by the 807th Engineering Aviation Battalion to improve the airfield for USAAF fighter and bomber use, with fuel tank farms, a new bituminous runway, and a runway for bomber aircraft completed by August.
Kadena airfield was initially under the control of Seventh Air Force; however, on 16 July 1945, Headquarters Eighth Air Force was transferred, without personnel, equipment, or combat elements, from RAF High Wycombe, England, to the town of Sakugawa, near Kadena. Upon reassignment, its headquarters element absorbed the command staff of the inactivated XX Bomber Command. Kadena was used by the headquarters staff for administrative flying requirements.
Upon its reassignment to the Pacific Theater, Eighth Air Force was assigned to the U.S. Army Strategic Air Forces with a mission to train new B-29 Superfortress bomber groups arriving from the United States for combat missions against Japan. In the planned invasion of Japan, the mission of Eighth Air Force would be to conduct strategic bombing raids from Okinawa. However, the atomic bombings of Japan led to the Japanese surrender before Eighth Air Force saw action in the Pacific theater.
The surrender of Japanese forces in the Ryukyu Islands came on 7 September. General Joseph Stilwell accepted the surrender in an area that would later become Kadena's Stearley Heights housing area.
Known World War II units assigned to Kadena were:
319th Bombardment Group (Light) (July–November 1945) (A-26 Invader) – Assigned to Seventh Air Force and flew missions to Japan and China, attacking airdromes, shipping, marshalling yards, industrial centers, and other objectives.
317th Troop Carrier Group (August–September 1945) (C-46 Commando, C-47 Skytrain) – Assigned to Seventh Air Force in the Philippines. Deployed aircraft to Kadena and flew courier and passenger routes to Japan, Guam, Korea, and the Philippines, and transported freight and personnel in the area.
333d Bombardment Group (Very Heavy) (August 1945 – May 1946) (B-29) – Assigned to Eighth Air Force for the planned invasion of Japan. Operations terminated before the group could enter combat. For a time after the war the group ferried Allied prisoners of war from Japan to the Philippines. Inactivated May 1946.
346th Bombardment Group (Very Heavy) (August 1945 – June 1946) (B-29) – Assigned to Eighth Air Force for the planned invasion of Japan. Operations terminated before the group could enter combat. After the war the group participated in several show-of-force missions over Japan and for a time ferried Allied prisoners of war from Okinawa to the Philippines. Inactivated June 1946.
316th Bombardment Wing (September 1945 – June 1948) – Assigned to Eighth Air Force for the planned invasion of Japan. Operations terminated before the group could enter combat. Reassigned to U.S. Far East Air Forces January 1946. Redesignated as the 316th Composite Wing in January 1946, and the 316th Bombardment Wing (Very Heavy) in May 1946. Inactivated June 1948.
413th Fighter Group (November 1945 – October 1946) (P-47N) – Assigned to Eighth Air Force and served as part of the air defense and occupation force for the Ryukyu Islands after the war. Inactivated October 1946.
On 7 June 1946, Headquarters Eighth Air Force moved without personnel or equipment to MacDill AAF, Florida. It was replaced by the 1st Air Division, which directed fighter, reconnaissance, and bomber organizations and provided air defense for the Ryukyu Islands until December 1948.
Twentieth Air Force became the command and control organization for Kadena on 16 May 1949.
Postwar years and the Korean War
The Korean War emphasized the need for maintaining a naval presence on Okinawa. On 15 February 1951, the U.S. Naval Facility, Naha, was activated and later became commissioned on 18 April. Commander Fleet Activities, Ryukyus was commissioned on 8 March 1957. On 15 May 1972, upon reversion of Okinawa to Japanese administration, the two organizations were combined to form Commander Fleet Activities, Okinawa. With the relocations of Commander Fleet Activities, Okinawa to Kadena Air Base on 7 May 1975, the title then became Commander Fleet Activities, Okinawa/US Naval Air Facility, Kadena.
Twentieth Air Force was inactivated in March 1955. Fifth Air Force became the command and control organization for Kadena. Known major postwar USAAF/USAF units assigned to Kadena have been:
6th Bombardment Group (Very Heavy) (June 1947 – October 1948) (B-29) – Participated in show-of-force flights over Japan and dropped food and other relief supplies to newly freed Allied prisoners of war. Inactivated October 1948.
71st Tactical Reconnaissance Wing (August 1948 – October 1948) (F-5, F-6, RF-51, RF-61) – Equipped with reconnaissance aircraft, flew aerial photography missions over Japan and southern Korea. Inactivated October 1948. The 71st Air Base Group provided base host unit support for organizations assigned to Kadena.
32d Composite Wing (August 1948 – April 1949) (RB/SB-17G, C-46, RB/SB-29) – Replaced the 71st Tactical Reconnaissance Wing. Provided photographic reconnaissance and search and rescue support. The 32d Air Base Group provided base host unit support for organizations assigned to Kadena.
6332d Air Base Group (April 1949 – January 1950) (redesignated 6332d Air Base Wing (January 1950 – May 1955) and 6313th Air Base Wing (October 1957 – December 1964)) – Provided base host unit support for organizations assigned to Kadena.
19th Bombardment Group (Medium) (July 1950 – May 1954) (B-29) – Deployed from Andersen AFB, Guam. Flew combat missions over Korea. Reassigned May 1954 to Pinecastle AFB, Florida.
22d Bombardment Group (Medium) (July 1950 – October 1950) (B-29) – Deployed from March AFB, California. Flew combat missions over Korea.
307th Bombardment Group (Medium) (September 1950 – February 1951) (B-29) – Deployed from MacDill AFB, Florida, to engage in combat operations during the Korean War. While on Okinawa, the 307th was awarded the Republic of Korea Presidential Unit Citation for its air strikes against enemy forces in Korea. It was also awarded the Distinguished Unit Citation and several campaign streamers. The 307th BG returned from deployment during February 1951; however, elements of the group remained deployed on Okinawa on a semi-permanent basis until 1954.
581st Air Resupply Group (September 1953 – September 1956) (B-29) – Reassigned from the inactivating 581st Air Resupply and Communications Wing at Clark AB, Philippines. Performed unconventional warfare and counterinsurgency psychological operations. Inactivated and mission transferred to the U.S. Navy.
At the end of the Eisenhower presidency, around 1,700 nuclear weapons were deployed on shore in the Pacific, 800 of which were at Kadena Air Base.
18th Wing
On 1 November 1954, the 18th Fighter-Bomber Wing arrived from Osan Air Base, South Korea. Under changing designations, the wing has been the main USAF flying force at Kadena for over 50 years. The wing has maintained assigned aircraft, crews, and supporting personnel in readiness to respond to orders from Fifth Air Force and Pacific Air Forces. The wing initially flew three squadrons of North American F-86 Sabres: the 12th, 44th and 67th Fighter Squadrons. The wing flew tactical fighter sorties from Okinawa, and made frequent deployments to South Korea, Japan, Formosa, and the Philippines. In 1957, the wing upgraded to the F-100 Super Sabre and the designation was changed to the 18th Tactical Fighter Wing. In 1960, a tactical reconnaissance mission was added to the wing with the arrival of the McDonnell F-101 Voodoo and the 15th Tactical Reconnaissance Squadron.
Vietnam War era
Beginning in 1961, the 18th TFW was sending its tactical squadrons frequently to South Vietnam and Thailand, initially with its RF-101 reconnaissance jets, and beginning in 1964 with its tactical fighter forces supporting USAF combat missions in the Vietnam War. In 1963, the F-105 Thunderchief replaced the Super Sabres. During the Temporary duty assignment (TDY) deployments to Southeast Asia, the 12th TFS lost four aircraft, the 44th TFS lost one F-105D, and the 67th TFS lost nine aircraft, including three on the first day of Operation Rolling Thunder. The deployments to Southeast Asia continued until the end of United States involvement in the conflict.
The RF-4C Phantom II replaced the RF-101 in the reconnaissance role in 1967. An electronic warfare capability was added to the wing in late 1968 with the attachment of the 19th Tactical Electronic Warfare Squadron from Shaw AFB South Carolina flying the EB-66 Destroyer. The B-66s remained until 1970, flying daily over the skies of Southeast Asia.
During the 1968 Pueblo crisis, the 18th deployed between January and June to Osan AB, South Korea following the North Korean seizure of the vessel. Frequent deployments to South Korea have been performed ever since to maintain the air defense alert mission there.
In 1972, the 1st Special Operations Squadron was assigned, bringing their specialized C/MC-130 Hercules aircraft to the wing. The squadron was reassigned in 1978. The reconnaissance mission ended in 1989 with the retirement of the RF-4Cs, and the inactivation of the 15th TRS.
Post-Vietnam
The F/RF-4C Phantom II replaced the F-105s in 1971, and a further upgrade to the F-15 Eagle was made in 1979.
On 6 November 1972, the 18th Tactical Fighter Wing dispatched McDonnell Douglas F-4C/D Phantom II fighter jets of the 44th Tactical Fighter Squadron and the 67th Tactical Fighter Squadron to Ching Chuan Kang Air Base, Taiwan, where they remained until 31 May 1975 to assist Taiwan's air defense against threats from China.
The designation of the wing changed on 1 October 1991 to the 18th Wing with the implementation of the objective wing concept. Under that concept, the 18th expanded into a composite air wing combining multiple missions with different aircraft: aerial refueling with KC-135 Stratotanker tanker aircraft, and surveillance, warning, command, control and communications with the E-3 Sentry. An airlift mission was added in June 1992 with the C-12 Huron, transporting mission-critical personnel, high-priority cargo and distinguished visitors. In February 1993, the 18th Wing gained responsibility for coordinating rescue operations in the Western Pacific and Indian Ocean.
Arrival of Patriot unit
In November 2006, the U.S. Army's 1st Battalion, 1st Air Defense Artillery Regiment, a Patriot PAC-III unit, deployed to Kadena from Fort Bliss, Texas. Originally assigned to the 31st ADA Brigade at Fort Bliss, the battalion was reassigned to the 94th AAMDC, USINDOPACOM. The move was part of the BRAC consolidation of U.S. Army bases and security agreements between the U.S. and Japan. The battalion's mission is to defend the base against tactical ballistic missiles from North Korea. The deployment was controversial on Okinawa, being greeted with protests.
Foreign units
With the approval of both the Japanese government and the United States Air Force, other U.S. allies have expressed an intention to host units at the air base to strengthen united cooperation against regional threats: North Korea and the growing influence of China in the Asia-Pacific.
Australia and New Zealand
In early September 2018, Australian Defence Minister Christopher Pyne and New Zealand's Deputy Prime Minister Winston Peters stated that it was in their interest to aid both Japan and the United States against North Korea with patrol aircraft. These units would provide additional capability to prevent North Korean vessels conducting illegal trading out at sea in violation of UN sanctions. The Royal Australian Air Force deployed two AP-3C aircraft, along with a P-3K2 Orion from the RNZAF. RAAF P-8 Poseidons have subsequently been periodically deployed to Kadena as part of Operation Argos. RNZAF Orions also periodically operate from Kadena, with four such deployments having been made as of April 2021.
Other units
Other major units assigned to Kadena since 1954 have been:
313th Air Division (March 1955 – October 1991)Assumed responsibility for air defense of the Ryukyu Islands and tactical operations in the Far East, maintaining assigned forces at the highest possible degree of combat readiness. In addition, it supported Fifth Air Force in the development, planning, and coordination of requirements for future Air Force operations in the Ryukyu Islands. The division also supported numerous exercises such as Cope Thunder, Cope Diamond, Team Spirit and Cope North. Provided base host unit support for organizations assigned to Kadena (May 1955 – October 1957, December 1964 – October 1974). The newly-considated 18th Wing replaced the 313th Air Division in 1991.
Kadena Task Force (Provisional) (SAC) (May 1955 – May 1958) (RB/ERB-47H)Performed Electronic Reconnaissance and Countermeasures activities.
498th Tactical Missile Group (February 1961 – October 1969) (TM-76B/CGM-13B)Equipped with the TM-76B, renumbered in 1963 to CGM-13B Mace guided cruise missile, four hard site launch sites.
4252d Strategic Wing (SAC) (January 1965 – April 1970)376th Strategic Wing (SAC) (April 1970 – August 1973) (B-52, KC-135, EC-135)Activated by SAC at Kadena. Replaced 4252nd Strategic Wing. Conducted B-52 combat operations in Southeast Asia from January 1965 to September 1970, when Arc Light Missions from the base were terminated. The distance to targets in South Vietnam resulted in reduced payload and greater air-refueling demands for Kadena and Guam based B-52s and from April 1967 the USAF began basing B-52s at U-Tapao Royal Thai Navy Airfield, this together with Japanese opposition to the war led to reducing B-52 operations from Kadena. Conducted KC-135 air refueling and RC-135 electronic reconnaissance from April 1970 to 1991. Conducted airborne radio relay operations, April–November 1970, February–June 1971 and March 1972 – August 1973. Until 1991, the wing controlled the 909th Air Refueling Squadron (KC-135A/Q/R) and supported rotational reconnaissance aircraft (TR-1, SR-71) after the inactivation of the 9th SRW in 1974. The Wing was inactivated at Kadena on 30 October 1991 with the drawdown of strategic forces. Its mission was absorbed by the 18th Wing.
9th Strategic Reconnaissance Wing (SAC) (1968–1974) (A-12, SR-71)Deployed from Beale Air Force Base, California. Performed strategic reconnaissance over North Vietnam and Laos. In March 1968, SR-71s began arriving at Kadena from Beale AFB. On 15 March, Det OL-8 was declared operationally ready for SR-71 sorties. With the completion of each mission, a "Habu" was painted on the aircraft. The SR-71s averaged approximately one sortie a week for nearly two years. By 1970, the SR-71s were averaging two sorties per week, and by 1972 the SR-71 was flying nearly one sortie every day. While deployed on Okinawa, the SR-71s and their aircrew members gained the nickname Habu (as did the A-12s preceding them) after a pit viper found on Okinawa which the Okinawans thought the plane resembled. The SR-71 mission on Okinawa ended in 1990.
18th Combat Support Wing (1985–1991)The 18 CSW was originally the 18th Combat Support Group of the 18th Tactical Fighter Wing before being elevated to a wing in 1985. It acted as the installation management command and controlled all the services required to run the installation. With the consolidation of numerous missions into the 18th Wing in 1991, the 18 CSW was downgraded and redesignated the 18th Support Group. It was redesignated again as the 18th Mission Support Group in 2002.
Beacon
The USAF is responsible for maintenance.
Role and operations
The 18th Wing is the host unit at Kadena AB. In addition, the base hosts associate units from five other Air Force major commands, the United States Navy, and other Department of Defense agencies and direct reporting units. Associate units operate more than 20 permanently assigned, forward-based or deployed aircraft from the base on a daily basis.
18th Wing
The 18th Wing is the Air Force's largest and most diverse combat wing. The wing is broken down into five groups: the 18th Operations Group, the 18th Maintenance Group, the 18th Mission Support Group, the 18th Civil Engineer Group, and the 18th Medical Group. Kadena's fleet includes F-15C/D Eagles (the 44th and 67th Fighter Squadrons); KC-135R/T Stratotankers (the 909th Air Refueling Squadron); E-3B/C Sentries (the 961st Airborne Air Control Squadron); and HH-60 Pave Hawks (the 33d Rescue Squadron).
353d Special Operations Group
The 353d Special Operations Group is an element of the Air Force Special Operations Command, Hurlburt Field, Florida. The 750 Airmen of the group are organized into the 1st Special Operations Squadron, the 17th Special Operations Squadron, a maintenance squadron, the 320th Special Tactics Squadron, and an operations support squadron. The flying squadrons operate the MC-130J Commando II, MC-130H Combat Talon II.
733d Air Mobility Squadron
The 733d Air Mobility Squadron manages all passengers and cargo traveling by air in and out of Kadena. This Air Mobility Command unit supports about 650 aircraft arrivals and departures every month, moving more than 12,000 passengers and nearly 3,000 tons of cargo.
82d Reconnaissance Squadron
Air Combat Command's 82d Reconnaissance Squadron maintains aircraft; prepares combat-ready aircrews; and analyzes, processes, and disseminates intelligence data in support of RC-135V/W Rivet Joint, RC-135U Combat Sent and WC-135 Constant Phoenix missions flown in the Pacific Theater.
390th Intelligence Squadron
This Air Intelligence Agency squadron conducts information operations by providing tailored combat intelligence and assessing the security of friendly command, control, communication and computer systems to enhance warfighting survivability, situation awareness and targeting.
US Army
The U.S. Army's 1st Battalion, 1st Air Defense Artillery Regiment, assigned to the 94th AAMDC is a Patriot PAC-3 battalion. It consists of four Patriot missile batteries (Alpha through Delta), a maintenance company (Echo) and a headquarters battery (HHB).
Housing Management Office
The Air Force Housing Management Office (HMO) manages Military Family Housing (MFH) for all service members assigned to Okinawa. Kadena Air Base contains nearly 4,000 family housing units, in apartment, townhouse, and single family home styles.
Other units
American Forces Network Detachment 11, AFNEWS
Det 3, PACAF Air Postal Squadron
Det 3, United States Air Force School of Aerospace Medicine
525 EMXS, Support Center Pacific
Det 3, WR-ALC Air Force Petroleum Office
Det 624, Air Force Office of Special Investigations
Det 233, Air Force Audit Agency
Field Training Detachment Det 15, 372nd Training Squadron
Defense Commissary Agency
DoDEA Pacific Director's Office
Department of Defense Education Activity Pacific-Okinawa District
Marine Wing Liaison Kadena
American Red Cross
Based units
Flying and notable non-flying units based at Kadena Air Base.
Units marked GSU are Geographically Separate Units, which although based at Kadena, are subordinate to a parent unit based at another location.
United States Air Force
Pacific Air Forces (PACAF)
Fifth Air Force
18th Wing (Host Wing)
Headquarters 18th Wing
18th Comptroller Squadron
18th Operations Group
18th Aeromedical Evacuation Squadron
18th Operations Support Squadron
31st Rescue Squadron
33rd Rescue Squadron – HH-60G Pave Hawk
44th Fighter Squadron – F-15C/D Eagle
67th Fighter Squadron – F-15C/D Eagle
623rd Air Control Squadron
909th Air Refueling Squadron – KC-135R Stratotanker
961st Airborne Air Control Squadron – E-3B/C Sentry
18th Civil Engineer Group
18th Civil Engineer Squadron
718th Civil Engineer Squadron
18th Maintenance Group
18th Aircraft Maintenance Squadron
18th Component Maintenance Squadron
18th Equipment Maintenance Squadron
18th Munitions Squadron
718th Aircraft Maintenance Squadron
18th Medical Group
18th Aerospace Medicine Squadron
18th Dental Squadron
18th Medical Operations Squadron
18th Medical Support Squadron
18th Mission Support Group
18th Contracting Squadron
18th Communications Squadron
18th Force Support Squadron
18th Logistics Readiness Squadron
18th Security Forces Squadron
718th Force Support Squadron
18th Air Expeditionary Wing
5th Expeditionary Airborne Command and Control Squadron – E-8C J-STARS
Pacific Air Forces Air Postal Squadron
Seventh Air Force
554th RED HORSE Squadron
Detachment 1 (GSU)
Air Force Special Operations Command (AFSOC)
353rd Special Operations Group
1st Special Operations Squadron – MC-130J Commando II
21st Special Operations Squadron – CV-22B Osprey
320th Special Tactics Squadron
353rd Special Operations Support Squadron
353rd Special Operations Aircraft Maintenance Squadron
753rd Special Operations Aircraft Maintenance Squadron
Air Mobility Command (AMC)
55th Wing
55th Operations Group
82nd Reconnaissance Squadron (GSU) – RC-135
390th Intelligence Squadron (GSU)
United States Air Force Expeditionary Center
515th Air Mobility Operations Wing
515th Air Mobility Operations Group
733rd Air Mobility Squadron (GSU)
Air Combat Command (ACC)
Sixteenth Air Force
363rd Intelligence, Surveillance and Reconnaissance Wing
361st Intelligence, Surveillance and Reconnaissance Group
43rd Intelligence Squadron
Detachment 1 (GSU)
United States Army
U.S. Army Pacific (USARPAC)
94th Army Air and Missile Defense Command
38th Air Defense Artillery Brigade
1st Battalion, 1st Air Defense Artillery Regiment – MIM-104 Patriot
United States Marine Corps
Marine Forces Pacific (MARFORPAC)
III Marine Expeditionary Force
1st Marine Aircraft Wing
Marine Wing Liaison Kadena
Naval Communications Detachment Okinawa
The mission of NAVCOMM Det Okinawa is to provide communications support for the Seventh Fleet and supporting units, U.S. Naval Forces Japan, U.S. Naval Forces Korea, Defense Information Systems Agency and the Japanese Maritime Self Defense Force. The detachment has four work centers:
TSCCOMM provides telecommunications support for Patrol Wing ONE Det Kadena, deployed patrol squadrons and Marine Wing Detachment
CMS provides communications security (COMSEC) materials and cryptographic equipment to Patrol Squadrons and detachments, and to Commander Amphibious Group One/CTF76, located at White Beach
Naval Radio Transmitter Facility (NRTF) Awase provides HF transmitter support to the fleet and area commanders and LF transmitter support for submarines operating in the Pacific and Indian Oceans
SURTASS supports command and control functions to SURTASS ships operating in the Indian Ocean and Western Pacific.
Major commands to which assigned
Tenth United States Army, 1 April 1945
Eighth Air Force, 16 July 1945
Pacific Air Command, United States Army (PACUSA), 6 December 1945
Redesignated: Far East Air Force, 1 January 1947
Redesignated: Pacific Air Forces, 1 July 1957
Base facilities
Gate 5 Park
Kadena Passenger Terminal
Kadena Base Exchange
Schilling Community Center
Rocker Enlisted Club
Officers Club
EDIS (Early Developmental Intervention Services)
Kadena Aeroclub
Banyan Tree Golf Course
Jack's Place Restaurant (originally Skoshi KOOM – Kadena Officers Open Mess)
Kadena High School
Kadena Middle School
Kadena Elementary School
Bob Hope Primary School
Ryukyu Middle School
Amelia Earhart Intermediate School
Stearley Heights Elementary School
The Asian Division of University of Maryland University College (UMUC)
Kadena Commissary
Environmental concerns
In June 2013, the government of Japan discovered 22 barrels buried on former base property that tests showed had previously contained dioxins and herbicides. Tests on the surrounding soils found dioxin levels at 8.4 times and groundwater at 280 times the legal limit. The land in question is a soccer field bordering the base's Bob Hope Primary School and Amelia Earhart Intermediate School. Angry parents accused base officials, under base commanders Brigadier General Matt H. Molloy and Brigadier General James B. Hecker, of failing to notify them of the toxins near the school and failing to investigate the matter. The parents established a Facebook group on 10 January 2014 titled, "Bob Hope/AEIS – Protect Our Kids." After the issue was reported in the Japan Times and Stars and Stripes, USAF officials tested the soil and water at the schools and said that no excessive toxic substances were found.
Soil on the base tested positive for very high levels of polychlorinated biphenyls (PCBs), in the thousands of parts per million, much higher than most other contamination sites in the world, according to a report issued in 1987 after an investigation prompted by a small unrelated spill of transformer oil.
Accidents and incidents
30 June 1959: an F-100 from the wing crashed on Okinawa during a training flight after suffering an engine fire. The pilot successfully ejected and suffered no harm, but the aircraft crashed into a local elementary school, killing 11 students plus six residents of the nearby neighborhood, and injuring 210.
19 November 1968: A B-52 of the 4252d Strategic Wing broke up and caught fire after the aircraft aborted takeoff on an Arc Light bombing mission to South Vietnam. Two crewmen died of their injuries.
2 November 1987: RF-4C 66-0416 (15 TRS / 18 TFW) entered a spin at 16,500 feet in a Whiskey area approximately 95 miles northeast of Kadena. Both crew members ejected; one crew member's body was never recovered, while the other survived.
28 May 2013: F-15C of the 44th Fighter Squadron crashed into the ocean off Okinawa. The pilot ejected and was rescued by the Air Rescue Wing Naha Detachment of the Japan Air Self-Defense Force.
11 June 2018: F-15C from the 44th Fighter Squadron crashed into the sea off Okinawa. The pilot was rescued by the JASDF Air Rescue Wing Naha Detachment.
Notes
Bibliography
Fletcher, Harry R. (1989) Air Force Bases Volume II, Active Air Force Bases outside the United States of America on 17 September 1982. Maxwell AFB, Alabama: Office of Air Force History.
Martin, Patrick (1994). Tail Code: The Complete History of USAF Tactical Aircraft Tail Code Markings. Schiffer Military Aviation History.
Maurer, Maurer. Air Force Combat Units of World War II. Washington, D.C.: U.S. Government Printing Office, 1961 (republished 1983, Office of Air Force History).
Rogers, Brian (2005). United States Air Force Unit Designations Since 1978. Hinkley, England: Midland Publications.
External links
Official site
globalsecurity.org on Kadena
498th Tactical Missile Group at Kadena
1-1 ADA Battalion's Official Homepage
静かな夜を返せ!/未明離陸に抗議集会
Airfields of the United States Army Air Forces in Occupied Japan
Airports in Okinawa
Installations of the United States Air Force in Japan
Ryukyu Islands
Strategic Air Command military installations
United States Armed Forces in Okinawa Prefecture
Airfields of the United States Army Air Forces Air Transport Command in the Pacific Ocean Theater
Military airbases established in 1945 |
36551433 | https://en.wikipedia.org/wiki/Ping%20Fu | Ping Fu | Ping Fu (born 1958) is a Chinese-American entrepreneur. She is the co-founder of 3D software development company Geomagic, and was its chief executive officer until February 2013, when the company was acquired by 3D Systems Inc. She is now the Vice President and Chief Entrepreneur Officer at 3D Systems. Fu grew up in China during the Cultural Revolution and moved to the United States in 1984. She co-founded Geomagic in 1997 with her then-husband Herbert Edelsbrunner, and has been recognized for her achievements with the company through a number of awards, including being named Inc. magazine's 2005 "Entrepreneur of the Year". In 2013, she published her memoir, Bend, Not Break, co-authored with MeiMei Fox.
Early life and education
Ping Fu was born in 1958 in Nanjing, China, where her father was a professor at the Nanjing University of Aeronautics and Astronautics (NUAA). Fu spent her childhood and early adulthood in China. She grew up during the Cultural Revolution, during which she was separated from both her parents for several years. After the end of the Cultural Revolution, she attended the college that later became Suzhou University, studying Chinese literature. Fu has related in interviews and in her memoir that she chose to research China's one-child policy for her thesis and traveled to the countryside, where she found that infanticide of female infants was common, as was abortion, even late into pregnancy. Fu said that, after turning in her research, she believes it was passed to a newspaper editor who wrote an editorial on the infanticide of female children. Fu has stated that she was later briefly imprisoned by government officials and was told to leave the country. After this event, she left school without graduating.
Fu left China and arrived in the United States in January 1984. She initially enrolled at the University of New Mexico (UNM) in Albuquerque but later moved to San Diego to study computer science as an undergraduate at the University of California, San Diego. During her time in San Diego, Fu worked part-time at a software company called Resource Systems Group as a programmer and database software consultant. Following her graduation from UC San Diego with a bachelor's degree in computer science, she moved to Illinois, where she took a job with Bell Labs. The company offered a Ph.D. assistance program, through which Fu enrolled in the computer science Ph.D. program at the University of Illinois at Urbana-Champaign (UIUC). At UIUC she completed a master's degree in computer science.
Career
National Center for Supercomputing Applications
In the early 1990s, Fu began working at the National Center for Supercomputing Applications (NCSA) at UIUC. Her focus was on computer graphics and visualization, including projects such as developing the morphing software for animation of the liquid metal T-1000 robot in the film Terminator 2: Judgment Day. While at NCSA, she hired student researcher Marc Andreessen and was his supervisor on the project developing NCSA's Mosaic, an early multimedia web browser credited with popularizing the World Wide Web. According to her supervisor, Joseph Hardin, Fu was one of the managers involved in the discussions from which the idea for the browser was developed. In 1994, Fu took a temporary position at the Hong Kong University of Science and Technology, returning to NCSA in 1995.
Geomagic
In 1996, Marc Andreessen's success with his own company, Netscape, inspired UIUC to encourage entrepreneurship and Fu developed the idea for a company that would combine manufacturing and digital technology, including 3D modeling software, the concept of which she called the "Personal Factory". She founded Geomagic with her then-husband, Herbert Edelsbrunner, whose research formed the basis for the initial software to be developed by the company. In 1997, she left the NCSA to begin operations at Geomagic, taking on the role of CEO. The company was originally named Raindrop Geomagic and was based in Champaign-Urbana, Illinois. It was founded with the aim of developing 3D imaging software that could enable customized manufacturing using 3D printers. Initially, Fu and Edelsbrunner funded Geomagic themselves, along with investment from Fu's sister Hong and her husband, and later from a group of angel investors.
In 1999, Fu relocated Geomagic from Illinois to the Research Triangle Park, North Carolina. That year, Franklin Street Partners committed to invest $6.5 million in Geomagic. Fu then hired an experienced executive as CEO who ran Geomagic for two years before stepping down when the company was close to bankruptcy. Fu returned to the role of CEO in 2001, investing her own money into Geomagic and working without a salary in order to continue paying the company's employees. She was able to lead Geomagic back to stability, gaining a significant contract with Align Technology, and Geomagic returned to profitability over the following two years.
From 2001 to 2003, Geomagic's sales tripled under Fu's leadership. The company became known as a leader in digital shape sampling and processing. After she and Edelsbrunner divorced, he continued to serve as an advisor at Geomagic.
In February 2013, Fu sold Geomagic to 3D Systems Corporation, a 3D printing company. She became the Chief Strategy Officer and Vice-President of 3D Systems.
Other roles
In addition to leading Geomagic, Fu has held a number of advisory roles relating to technology and entrepreneurship and with charitable organizations. She has served on the U.S. National Advisory Council on Innovation and Entrepreneurship since 2010 and is also a member of the National Council on Women in Technology. In 2012, she was appointed to the board of the Long Now Foundation, a non-profit organization focused on long-term thinking and enduring technology. She also serves on the board of the Frank Hawkins Kenan Institute of Private Enterprise at the University of North Carolina, the board of Live Nation Entertainment and is an advisor at Modern Meadow, an organization focused on tissue engineering.
Memoir
On December 31, 2012, Fu published a memoir, Bend, Not Break: A Life in Two Worlds. Co-authored with MeiMei Fox, the book told the stories of her life, from her early childhood in China to her experiences as an entrepreneur, including founding and leading Geomagic. The book received positive reviews from outlets including The Wall Street Journal and Oprah.com.
Beginning in January 2013, commenters in the Amazon.com reviews for the memoir began posting critical reviews, accusing Fu of lying about events in her past. Around the same time, Forbes published an interview with Fu, first in English and then in Chinese on Forbes China, that discussed Fu's memoir and her early life and contained an inaccurate interpretation of where Fu lived during the Cultural Revolution. Chinese netizens responded to the piece with criticisms regarding alleged fabrication of events and inconsistencies in media coverage of Fu, which raised questions in the media about the veracity of details included in the memoir. Chinese blogger Fang Zhouzi was among the critics, and he later raised further questions and criticisms based on earlier media coverage of Fu. Following the initial criticisms from Fang Zhouzi and other critics, commenters appearing to be non-native English speakers knowledgeable about Chinese history posted hundreds of negative comments in the memoir's Amazon.com reviews, leading The Daily Beast and The New York Times to conclude that Fu was the subject of an online attack.
Fu responded to the criticisms through a public statement and a post on the Huffington Post website answering questions that were raised about her childhood, education and being forced to leave China. She acknowledged that there were some inaccuracies in the book. She also acknowledged that the Red Guard atrocity she related in the memoir and media interviews, regarding a teacher being pulled apart by four horses, may have been an emotional memory, the result of hearing tales of such barbarity in old China as a child and having nightmares about it, or seeing it in a movie, rather than an event she actually witnessed. In response to questions about accuracy of details in the book, her publisher stated that the book is a memoir, rather than a journalistic account of the Cultural Revolution. Fu has said that a second printing of the memoir will correct inaccuracies that have been pointed out.
Awards and recognitions
For her work with Geomagic, Fu has received a number of awards. In 2003 she was named the Ernst & Young "Entrepreneur of the Year" for the Carolinas and received the Entrepreneurial Inspiration Award from North Carolina's Council for Entrepreneurial Development. The following year, Fast Company named her a 2004 "Fast 50" winner. In 2005, Inc. magazine named Ping its "Entrepreneur of the Year".
The America China Business Women’s Alliance awarded Fu its "Business Innovation Award" in 2008 and she received the 2010 "Leadership Award" from the CAD Society. The next year, she was given a "Lifetime Achievement" award by the Triangle Business Journal. In 2011 she was given the William C. Friday Award at North Carolina State University, and in 2012, the United States Citizenship and Immigration Services named Fu as an "Outstanding American by Choice".
References
1958 births
Living people
American autobiographers
American women chief executives
American technology company founders
American women company founders
Businesspeople from Nanjing
Chinese refugees
Refugees in the United States
University of California, San Diego alumni
American women computer scientists
American computer scientists
Women Internet pioneers
21st-century American memoirists
21st-century American women writers
American women memoirists
American technology chief executives
Writers from Nanjing
Chinese emigrants to the United States |
28730 | https://en.wikipedia.org/wiki/Security%20engineering | Security engineering | Security engineering is the process of incorporating security controls into an information system so that the controls become an integral part of the system’s operational capabilities. It is similar to other systems engineering activities in that its primary motivation is to support the delivery of engineering solutions that satisfy pre-defined functional and user requirements, but it has the added dimension of preventing misuse and malicious behavior. Those constraints and restrictions are often asserted as a security policy.
In one form or another, security engineering has existed as an informal field of study for several centuries. For example, the fields of locksmithing and security printing have been around for many years. The concerns for modern security engineering and computer systems were first solidified in a RAND paper from 1967, "Security and Privacy in Computer Systems" by Willis H. Ware. This paper, later expanded in 1979, provided many of the fundamental information security concepts, labelled today as Cybersecurity, that impact modern computer systems, from cloud implementations to embedded IoT.
Recent catastrophic events, most notably 9/11, have made security engineering quickly become a rapidly-growing field. In fact, in a report completed in 2006, it was estimated that the global security industry was valued at US $150 billion.
Security engineering involves aspects of social science, psychology (such as designing a system to "fail well", instead of trying to eliminate all sources of error), and economics, as well as physics, chemistry, mathematics, criminology, architecture, and landscaping.
Some of the techniques used, such as fault tree analysis, are derived from safety engineering.
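As an illustration of the kind of calculation fault tree analysis involves, the sketch below (an editorial example, not drawn from any cited source) computes the probability of a top-level failure event from independent basic events combined through AND and OR gates; the event names and probabilities are purely hypothetical.

```python
# Minimal fault tree evaluation, assuming independent basic events:
# an AND gate multiplies the input probabilities, while an OR gate
# combines them as 1 - product of (1 - p) over the inputs.

def and_gate(*probs):
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(*probs):
    q = 1.0
    for x in probs:
        q *= (1.0 - x)
    return 1.0 - q

# Hypothetical tree: a system compromise requires an exploitable
# vulnerability AND (a phished credential OR a brute-forced password).
p_vuln = 0.10
p_phish = 0.05
p_brute = 0.02

p_top = and_gate(p_vuln, or_gate(p_phish, p_brute))
print(round(p_top, 4))  # prints 0.0069
```

Real fault tree tools add features such as common-cause analysis and minimal cut sets, but the gate arithmetic above is the core idea shared with safety engineering.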
Other techniques such as cryptography were previously restricted to military applications. One of the pioneers of establishing security engineering as a formal field of study is Ross Anderson.
Qualifications
No single qualification exists to become a security engineer.
However, an undergraduate and/or graduate degree, often in computer science, computer engineering, or physical protection focused degrees such as Security Science, in combination with practical work experience (systems, network engineering, software development, physical protection system modelling, etc.) best qualifies an individual to succeed in the field. Other degree qualifications with a security focus exist. Multiple certifications, such as the Certified Information Systems Security Professional, or Certified Physical Security Professional are available that may demonstrate expertise in the field. Regardless of the qualification, the course must include a knowledge base to diagnose the security system drivers, security theory and principles including defense in depth, protection in depth, situational crime prevention and crime prevention through environmental design to set the protection strategy (professional inference), and technical knowledge including physics and mathematics to design and commission the engineering treatment solution. A security engineer can also benefit from having knowledge in cyber security and information security. Any previous work experience related to privacy and computer science is also valued.
All of this knowledge must be braced by professional attributes including strong communication skills and high levels of literacy for engineering report writing. Security engineering also goes by the label Security Science.
Related fields
Information security
See esp. Computer security
protecting data from unauthorized access, use, disclosure, destruction, modification, or disruption to access.
Physical security
deter attackers from accessing a facility, resource, or information stored on physical media.
Technical surveillance counter-measures
Economics of security
the economic aspects of economics of privacy and computer security.
Methodologies
Technological advances, principally in the field of computers, have now allowed the creation of far more complex systems, with new and complex security problems. Because modern systems cut across many areas of human endeavor, security engineers need to consider not only the mathematical and physical properties of systems; they also need to consider attacks on the people who use and form parts of those systems, such as social engineering attacks. Secure systems have to resist not only technical attacks, but also coercion, fraud, and deception by confidence tricksters.
Web applications
According to the Microsoft Developer Network the patterns and practices of security engineering consist of the following activities:
Security Objectives
Security Design Guidelines
Security Modeling
Security Architecture and Design Review
Security Code Review
Security Testing
Security Tuning
Security Deployment Review
These activities are designed to help meet security objectives in the software life cycle.
Physical
Understanding of a typical threat and the usual risks to people and property.
Understanding the incentives created both by the threat and the countermeasures.
Understanding risk and threat analysis methodology and the benefits of an empirical study of the physical security of a facility.
Understanding how to apply the methodology to buildings, critical infrastructure, ports, public transport and other facilities/compounds.
Overview of common physical and technological methods of protection and understanding their roles in deterrence, detection and mitigation.
Determining and prioritizing security needs and aligning them with the perceived threats and the available budget.
Product
Product security engineering is security engineering applied specifically to the products that an organization creates, distributes, and/or sells. Product security engineering is distinct from corporate/enterprise security, which focuses on securing corporate networks and systems that an organization uses to conduct business.
Product security includes security engineering applied to:
Hardware devices such as cell phones, computers, Internet of things devices, and cameras.
Software such as operating systems, applications, and firmware.
Such security engineers are often employed in separate teams from corporate security teams and work closely with product engineering teams.
Target hardening
Whatever the target, there are multiple ways of preventing penetration by unwanted or unauthorized persons. Methods include placing Jersey barriers, stairs or other sturdy obstacles outside tall or politically sensitive buildings to prevent car and truck bombings. Other methods include improving visitor management, and some new electronic locks take advantage of technologies such as fingerprint scanning, iris or retinal scanning, and voiceprint identification to authenticate users.
See also
Computer-related
Security Requirements Analysis
Secure coding
Security testing
Engineering Product Lifecycle
Authentication
Cryptanalysis
Data remanence
Defensive programming (secure coding)
Earthquake engineering
Explosion protection
Password policy
Security hacker
Software cracking
Software security assurance
Security pattern
Systems engineering
Trusted system
Economics of security
Physical
Access control
Authorization
Critical infrastructure protection
Environmental design (esp. CPTED)
Mantrap
Physical security
Secrecy
Secure cryptoprocessor
Security through obscurity
Misc. Topics
Full disclosure (computer security)
Security awareness
Security community
Steganography
Kerckhoffs's principle
References
Further reading
Articles and papers
patterns & practices Security Engineering on Channel9
patterns & practices Security Engineering on MSDN
patterns & practices Security Engineering Explained
Basic Target Hardening from the Government of South Australia |
18384321 | https://en.wikipedia.org/wiki/3tera | 3tera | 3tera, Inc., (founded in 2004) was a developer of system software for utility computing and cloud computing headquartered in Aliso Viejo, California. It was acquired by CA Technologies in 2010.
3tera is among the pioneers in the cloud computing space, having launched its AppLogic system in February 2006. Cloud computing is the set of technologies and business practices that enable companies of all sizes to build, deploy, monitor and scale applications using resources accessed over the internet. Web 2.0, SaaS, enterprise and government users are adopting cloud computing because it eliminates capital investment in hardware and facilities as well as reduces operations labor.
On June 16, 2014, CA Technologies announced that AppLogic would reach its End of Life on June 30, 2015.
Cloud computing software
3tera's flagship product, AppLogic, is used by numerous service providers as the foundation for their cloud computing offerings.
AppLogic is a turnkey system that converts arrays of servers into virtualized resource pools that users can subscribe to, to power their applications. Users can define not only virtual machines but also complex application infrastructure like firewalls, VPNs, load balancers and storage, all with nothing more than a browser. Service providers offer a variety of services based on AppLogic, including:
Virtual Private Server (VPS) with automatic high availability and flexible resource assignment
Virtual Private Datacenter (VPD or VPDC) for building complex on-line application infrastructure with only a browser
Cloud data storage
Software-as-a-Service applications (including project management, issue tracking, and customer relationship management (CRM))
What generally distinguishes 3tera's model is that AppLogic is made available both as a service and as a software license. This allows customers to obtain service from a number of providers, move easily between providers, and operate in-house cloud computing services.
3tera's principal founder was the late Vladimir Miloushev. Prior to its acquisition, the management and founders included Barry X Lynn, Chairman and Chief Executive; Peter Nickolov, President and Chief Technology Officer; Bert Armijo, Senior Vice President of Product Development; Eric Tessler, Vice President of Engineering; Essy Nickolova, Vice President of Marketing Communications; Robbie Vann-Adibè, Vice President of Sales; and Krasimira Nikolova, Chief Financial Officer.
References
Further reading
External links
Official Website
Software companies based in California
Cloud computing providers
Companies based in Aliso Viejo, California
CA Technologies
Defunct software companies of the United States |
28298865 | https://en.wikipedia.org/wiki/2010%20New%20Orleans%20Bowl | 2010 New Orleans Bowl | The 2010 New Orleans Bowl was the tenth edition of the New Orleans Bowl. The game was played at the Louisiana Superdome in New Orleans, Louisiana, on Saturday, December 18, 2010, at 9 p.m. Eastern Time. The contest was televised live on ESPN. The game featured the Ohio Bobcats of the Mid-American Conference versus the Troy Trojans from the Sun Belt Conference. Sponsored by R+L Carriers, the game was officially known as the R+L Carriers New Orleans Bowl.
Teams
Ohio Bobcats
The Bobcats entered the New Orleans Bowl with a record of 8–4. The team was one win away from playing in the MAC Championship Game before losing its final game of the season to Kent State. Ohio made its first appearance in the New Orleans Bowl. The Bobcats appeared in their third bowl game under head coach Frank Solich; however, they were 0–2 in the two prior games, and Ohio had never won a bowl game in school history in four attempts. They lost 21–17 to Marshall in the 2009 Little Caesars Pizza Bowl.
Troy Trojans
Troy entered the bowl game with a 7–5 overall record and won a share of the Sun Belt Conference championship. The Trojans played in their third straight bowl game. It also marked the third time in five seasons that Troy played in the New Orleans Bowl; they were 1–1 in the game, with a win over Rice in 2006 and a loss to Southern Miss in 2008. The previous season, Troy was defeated by Central Michigan by a score of 44–41 in the GMAC Bowl.
Scoring summary
Source
Statistics
Notes
The 2010 New Orleans Bowl marked the first time that the two schools played each other in the history of their programs.
References
New Orleans Bowl
New Orleans Bowl
Ohio Bobcats football bowl games
Troy Trojans football bowl games
2010 in sports in Louisiana |
62858858 | https://en.wikipedia.org/wiki/Rajabazar%20Science%20College | Rajabazar Science College | The Rashbehari Siksha Prangan (commonly known as Rajabazar Science College) is a university campus, one of five main campuses of the University of Calcutta (CU). The campus served as a cradle of Indian science; its faculty won the Nobel Prize in Physics in 1930 and many fellowships of the Royal Society, London.
History
Despite the fact that the Presidency College at Calcutta witnessed great scientific research by Jagadish Chandra Bose and Prafulla Chandra Ray in the last decade of the 19th and the first decade of the 20th century, the first organised scientific research at Calcutta University began with the establishment of its University College of Science and Technology in March 1914. A galaxy of Indian scientists joined the Chairs set up with the income of the endowments of nearly thirty-seven and a half lakh rupees donated by Sir Taraknath Palit and Sir Rashbehari Ghosh, who had been associated with the National Education Movement in Bengal since the Indian Universities Act of 1904–5 and the partition of Bengal in 1905.
The first group of faculty included Acharya Prafulla Chandra Ray, Chandrasekhara Venkata Raman, Ganesh Prasad, Sisir Kumar Mitra and the legendary 1915 M.Sc. batch comprising Satyendranath Bose, Meghnad Saha, Jnan Chandra Ghosh and Jnanendra Nath Mukherjee, among others. The holders of the Chairs often worked under great financial constraints and with whatever apparatus was locally available in Calcutta. Yet their scientific research soon put Calcutta University on the map of world recognition. The science that developed at Calcutta University in colonial India was not a colonial science, as it was hardly supported by any large-scale imperial funds and was solely meant for "the promotion and diffusion of scientific and technical education for the cultivation and advancement of science, both pure and applied, among Indians".
Science in Calcutta University before 1914
One reason why Calcutta University could not develop a research programme in science earlier than 1914 was partly a great handicap inherent in its constitution and partly the paucity of funds needed for the development of such an institute at the postgraduate level. Till the end of the 19th century Calcutta University remained mainly an examining body fed by a number of affiliated colleges which actually did the teaching and which were dispersed from Shimla and Mussoorie to Indore and Jaipur, and from Jaffna and Batticaloa to Sylhet and Chittagong. In these colleges there was hardly any provision for the teaching of science courses; with the exception of three colleges each in Civil Engineering and Medicine, all of the 85 colleges in British India by 1882 were teaching courses in liberal arts leading to matriculation, F.A., B.A., Honours and M.A. degrees. Careers in India were never virtually open to talent, though the principle had been asserted time and again in the Charter Act of 1833 and the Queen's Proclamation of 1858 after the Mutiny to allay fear, suspicion and distrust. In those days agriculture, manufacturing and commerce offered little to no incentive and were almost impossible to enter without proper skills, capital and the equality of terms needed to compete with European industry. Such discontentment among the educated unemployed gave rise to militant nationalism threatening the very existence of the British Raj in India. Higher education in India was singled out as the root of all evils. So, to curb this growing distress, Lord Curzon (then Viceroy of India) passed the Indian Universities Act in 1904 based on the recommendations of the Indian Universities Commission of 1902. While the various recommendations of the Commission to enable the Raj to control higher education in India are not strictly relevant here, some, at least those relating to the creation of new courses in science, became important.
The Commission for the first time suggested the creation of a Faculty of Science and Technology in addition to the faculties of Arts, Law, Medicine and Civil Engineering previously offered by Calcutta University. Thus Bachelor of Science (B.Sc.) and Master of Science (M.Sc.) courses were introduced for the first time in India. The Commission also recommended the award of a Doctor of Science (D.Sc.) degree, to be given to an M.Sc. after some years spent in original investigation.
When Sir Ashutosh Mookerjee became the Vice-Chancellor of Calcutta University in 1906, he seized upon the Indian Universities Act of 1904–5 and converted the university from an examining body into a teaching university which would not only start postgraduate degrees in the Humanities, English, Sanskrit, Pali, Arabic, Persian, Mental & Moral Philosophy, History, Economics and Mathematics but also establish Chairs in some of them with financial support from the government by 1912. But for seven years Mookerjee struggled hard to establish postgraduate teaching and research in Science and Technology despite his best intentions. The paucity of funds and accommodation was deplorable, which meant that even if there were men, there were no laboratories, workshops, museums or equipment to accommodate them. In Presidency College a small room measuring 35'6" by 25'6" was available to be used as a laboratory room and preparation room as well as a practical room for biology and physiology classes, whereas a large number of B.A. students did not get the opportunity of having a regular course of practical training.
Establishment of the University College of Science and Technology
In these circumstances, Calcutta University was pleasantly surprised to receive the princely gifts of Sir Taraknath Palit, an eminent barrister and advocate of national education during the Anti-Partition movement (1905). In June and October 1912 he donated total assets of fourteen and a half lakh rupees, which included his own dwelling house. His donations were made over to the university for the advancement of Science and Technology and were used to endow Chairs in Physics and Chemistry and to institute scholarships for distinguished graduates of Calcutta University for higher studies. The university had to provide "from its own funds" suitable lecture rooms, libraries, museums, laboratories, workshops and other facilities for teaching and research. As the funds provided by the university were not fully adequate, Mookerjee approached the Government of India for financial support, which was rejected by Henry Sharp, then joint secretary in the Department of Education. Sharp's opposition mainly emanated from his dislike of Calcutta University, which he feared would become a political body with a strong prejudice against the white men and the Europeans. He said, "To give this money to this place is to give money to the cause which will embarrass ourselves. The money will go to political ends rather than to truly educational ends."
In August 1913, Sir Rashbehari Ghosh, an eminent jurist and scholar, in a letter to Asutosh Mookerjee placed in the hands of the University "a sum of ten lakh rupees". As per the conditions of his gift, there were to be established four Chairs (one each for Applied Mathematics, Physics, Chemistry and Botany, with special reference to Agriculture) and eight studentships to be awarded to distinguished graduates of the University "to carry on investigation" under the guidance of a professor. Both Palit and Ghosh wanted the promotion and diffusion of scientific and technical education among their countrymen by indigenous agency, and with the money now available through their endowments Mookerjee could launch his projected dream. Thus, four days before the expiry of the fourth term of his Vice-Chancellorship, Mookerjee laid the foundation stone of the University College of Science and Technology on 27 March 1914, hoping fervently that "although the College of Science and Technology is an integral part of the University of Calcutta, it will be regarded not as a provincial but as an all-India college of Science and Technology to which students will flock from every corner of the Indian Empire, attracted by the excellence of the instruction imparted and of the facilities provided for research."
The influence of the teachings of Raja Rammohan Roy on the importance of education broadly based on science and technology impressed a group of newly graduated scientists: S.N. Bose, Jnan Chandra Ghosh, M.N. Saha, N.R. Sen, P.C. Mahalanobis, S.K. Mitra and Jnanendranath Mukherjee. Sir Asutosh Mukherjee believed in young talent; he selected promising young men, appointed them straightaway as lecturers in the postgraduate classes of the newly formed Science College and provided them with research facilities. He was the first, and perhaps the last, to establish a centre of academic studies at the university stage with a distinct atmosphere of learning and scholarship, and appointed some of the best men available in the country as university professors or lecturers.
The establishment of the Calcutta University College of Science and Technology signalled the beginning of outstanding research in some branches of science and applied science, which put India on the map of world recognition. A galaxy of Indian scientists joined the departments set up with the income of the endowments donated by Palit and Ghosh, began their work with whatever apparatus was available in India and soon made their mark in their chosen fields. The Palit Chairs in Physics and Chemistry and the Ghosh Chairs in Applied Physics, Chemistry, Mathematics and Botany were soon filled after the formal establishment of the University College of Science and Technology in March 1914.
Keys to the Start of Outstanding Research in "Non-Colonial" Science in Colonial India
The establishment of Chairs in Chemistry, Physics, Mathematics, Zoology and Botany was the start of outstanding research and teaching by dedicated Indian scientists. Sir Asutosh Mookerjee, who presided over the first session of the Indian Science Congress at Calcutta, where nearly a hundred scientists met on 26 January 1914, took great care to identify the right talent for the right post from different parts of India, in science as in the humanities, and was able to give the University College of Science and Technology a truly national character. No nation could live solely upon the achievements of its past or upon its borrowings from others and at the same time hope to retain its place among the great peoples of the Earth. Besides advancing the frontiers of knowledge, the work of the Indian scientists at the University College of Science and Technology not only helped increase the wealth of the country but also succeeded in drawing the attention of the scientific world. The dedication and devotion with which the Indian scientists at the University College of Science and Technology began their work to explain many unknown phenomena can only remind us of the zeal and enthusiasm with which William Jones and his choice band of thirty elite Englishmen in 1783–84 had begun their investigations into "the history and antiquities, arts, sciences and literature of Asia."
Despite the fact that the University College of Science and Technology was a department of Calcutta University set up as a part of the colonial educational despatch of 1854, there had been no substantial financial support from the British Raj to encourage the Indian scientists in their work, presumably under the idea that Indian brains were not suitable for scientific research, despite the great promise shown by Jagadish Chandra Bose and Prafulla Chandra Ray at the Presidency College years before the foundation of the University College of Science and Technology. Of the total expenditure of rupees 1,813,959 by the University College of Science and Technology between March 1914 and March 1922, the Government of India's contribution from public funds was a meagre rupees 1,20,000. Yet in spite of these financial constraints and difficulties none of the scientists left for better positions in imperial organisations. Instead, a classic example was provided by Prafulla Chandra Ray, who, when re-appointed Palit Professor for five years after reaching the age of superannuation, donated his full monthly salary for the entire period for the special benefit of his department, which was "proud to acknowledge him as its leader". During this difficult time there had been "a steady output of original work rapidly increasing in volume and improving in quality which emanated not from one or two extraordinarily isolated or exceptionally gifted workers blessed with special advantages and facilities, but from a large body of able and devoted scholars". No doubt the scientists at the University College of Science and Technology sowed the seeds of many a promising project which were to bear fruit in the post-independence years. The University College of Science and Technology was indeed an oasis of scientific research in India from 1914.
Sir C.V. Raman, a Palit Professor at the Science College, made his revolutionary discovery on the scattering of light, known as the Raman Effect. He announced the discovery on February 28, 1928, a day widely celebrated as National Science Day in India, and was awarded the Nobel Prize in Physics. Sir Jnan Chandra Ghosh became the first director of the newly created Indian Institute of Technology Kharagpur in 1951. Sir J.C. Ghosh was also the second person associated with the Science College (after Sir C.V. Raman) to become director of the Indian Institute of Science (IISc). Prof. A.P.C. Ray founded the Bengal Chemicals and Pharmaceutical Works, and along with Prof. Hemendra Kumar Sen he established the department of Applied Chemistry at the Science College in 1920. Prof. S.K. Mitra, the pioneer of radio science in India, founded the department of Radio Physics and Electronics, whose foundation stone was laid by Dr. Bidhan Chandra Roy (then Chief Minister of West Bengal) on April 21, 1949.
Notable scholars associated with the Science College include one Nobel Laureate in Physics, two National Award-winning film directors, at least twenty-five Shanti Swarup Bhatnagar laureates, six Fellows of the Royal Society of London, and several British knighthood holders and Padma awardees.
Academics
Rankings
Departments
The Science College campus houses a few of the country's oldest applied science departments, such as Applied Physics, Applied Chemistry and the Institute of Radio Physics and Electronics. The college was also responsible for introducing degrees like the M.Sc. and D.Sc. for the first time in India. The Faculty of Engineering and Technology arm of the Science College is the second oldest in West Bengal after IIEST Shibpur (formerly B.E. College, affiliated to Calcutta University).
The departments are as follows:
Radio Physics & Electronics (Electronics and Communication Engineering)
Applied Physics (Electrical Engineering and Instrumentation Engineering)
Applied Optics and Photonics (Optics and Optoelectronics Engineering)
Applied Chemistry (Polymer Science & Technology, Chemical Engineering and Chemical Technology)
Applied Mathematics
Physics
Chemistry
Electronic Science
Psychology & Applied Psychology
Physiology
Biophysics, Molecular Biology & Bioinformatics.
From 2015 onwards, the departments of CSE (Computer Science and Engineering), the A.K. Chowdhury School of IT (Information Technology) and OOE (Optics and Optoelectronics Engineering) have been shifted to the Technology Campus in Salt Lake.
Degrees being offered are:
D.Sc.
Ph.D.(Tech)
M.Tech.
B.Tech.
M.Sc.
M.Phil.
Research
The University College of Science and Technology has 172 principal investigators and around 1,500 research students.
On average, 250 PhDs are awarded every year, and around 750 research papers were published in 2011. Fifteen national and international patents have been filed so far by the science and technology departments of chemical engineering, polymer science, radio physics and electronics, applied physics, physiology, biotechnology and biochemistry. The proportion of teaching activity to research at the university is "50:50". The university has signed a high number of MoUs with different institutes across the globe.
The university is currently linked with over 50 institutions of higher learning across the globe, and the UK-India Education Research Initiative awarded £30,000 to CU's department of radio physics and the University of Sheffield for collaborative research on photonics and biomedical applications. The track record of Calcutta University science graduates at all-India competitive exams such as the UGC-CSIR NET (Council of Scientific and Industrial Research National Eligibility Test) and JEST (Joint Entrance Screening Test) for admission to premier research institutes in India has also been commendable, and CU alumni are at the helm of several Indian research organisations.
The Technology Faculty of the university is supported by the TEQIP Phase 3 project of the MHRD and has been working as the mentoring institute for the Indira Gandhi Institute of Technology, Sarang, Odisha and Jorhat Engineering College, Assam. The university also received a fund of INR 5 crore from the Rashtriya Uchchatar Shiksha Abhiyan (RUSA) for infrastructure development. After successful completion, the university received another fund of INR 50 crore from RUSA 2.0, comprising INR 35 crore for projects and INR 15 crore for manpower and entrepreneurship development. Different departments of the university have received funding from India and abroad for conducting research, manpower development and other activities. The university has many collaborations with industrial bodies for conducting joint research.
Based on research publication output in SCOPUS international data over a 10-year period, the University of Calcutta / Rajabazar Science College, along with the CRNN (Centre for Research in Nanoscience and Nanotechnology), has been granted funds by several government agencies under the R&D Infrastructure Division of the Ministry of Science and Technology, such as DST-PURSE (Promotion of University Research and Scientific Excellence) and FIST (Fund for Improvement of Science and Technology).
Notable alumni
Bibha Chowdhuri (Alumni of Physics Department; she was the only woman to complete an M.Sc. degree in the year 1936. She worked on particle physics and cosmic rays.)
Asima Chatterjee (Alumni and later Khaira Professor of the Chemistry Department. She was the first woman to receive a Doctorate of Science from an Indian university. She held office as a member of the Rajya Sabha and president of the Indian Science Congress Association, and was awarded the Padma Bhushan, the Shanti Swarup Bhatnagar Prize for Science and Technology and a D.Sc. (honoris causa) degree by Calcutta University.)
Purnima Sinha (Alumni of Physics department and one of the first Bengali women to receive a doctorate in Physics from an Indian university.)
Satyendra Nath Bose (Alumni and later Professor of the physics department. Fellow of Royal Society known for Bose-Einstein Condensate and Bosons)
Meghnad Saha (Alumni and later Khaira Professor of Physics department, fellow of Royal Society known for Saha Ionisation Equation and is regarded as father of Modern Astrophysics.)
Prasanta Chandra Mahalanobis (Alumni, fellow of Royal Society and founder of Indian Statistical Institute Kolkata)
Anadi Sankar Gupta (Alumni and Professor of Applied Mathematics Department, an Indian Mathematician, S.S. Bhatnagar Awardee, INSA Senior Scientist and Emeritus Professor of IIT Kharagpur.)
Sisir Kumar Mitra (Alumni of Physics Department, fellow of Royal Society, founder/ HOD of radio physics department and father of radio science in India)
Jnanchandra Ghosh (Alumni and later Professor of Chemistry, founder of IIT Model and first director of IIT Kharagpur.)
Jnanendranath Mukherjee (Alumni and later Professor of Chemistry department, Padma Bhushan Awardee, Commander of the Order of the British Empire, fellow of the Royal Society and Ex-General President of the Indian Science Congress Association, 1952)
Mrinal Sen (Alumni of Physics department, Indian film director. He won National Film Awards 18 times, along with the French Ordre des Arts et des Lettres in 1985, the Padma Bhushan in 2008, the Order of Friendship in 2005 and the Dadasaheb Phalke Award in 2005 for his contribution to Indian cinema.)
Amal Kumar Raychaudhuri (Alumni of Physics Department, an Indian physicist known for his most significant contribution, the Raychaudhuri equation.)
Chanchal Kumar Majumdar (Alumni of Physics Department, A Condensed matter physicist and Founder Director of S. N. Bose National Centre for Basic Sciences Kolkata. He is an elected fellow of the Indian National Science Academy, the National Academy of Sciences, India and the Indian Academy of Sciences – as well a member of the New York Academy of Sciences and the American Physical Society.)
Mihir Chowdhury (Alumni of chemistry department, elected fellow of the Indian National Science Academy and the Indian Academy of Sciences, and recipient of the Shanti Swarup Bhatnagar Prize for Science and Technology.)
Tapan Sinha (Alumni of physics department. He was one of the most prominent Indian film directors of his time, along with Satyajit Ray and Mrinal Sen. He won numerous national and international awards.)
Mani Lal Bhaumik (Alumni of physics department, known for his pioneering contribution to the development of the immensely popular LASIK eye surgery. He is a bestselling author, celebrated physicist, entrepreneur and philanthropist.)
Bidyut Baran Chaudhuri (Alumni of radio physics department, a senior computer scientist at ISI Kolkata, a J.C. Bose Fellow and Life Fellow of the IEEE "for contributions to pattern recognition, especially Indian language script OCR, document processing and natural language processing".)
Bikas Chakrabarti (Alumni of Physics department, An Indian Physicist and emeritus professor of physics at Saha Institute of Nuclear Physics.)
Sankar Kumar Pal (Alumni of radio physics and electronics department, Computer Scientist and AI Researcher, Ex-Director of Indian Statistical Institute Kolkata and Padma Shri Awardee of 2013. He is also a recipient of Shanti Swarup Bhatnagar Prize for Science and Technology.)
Biswarup Mukhopadhyaya (Alumni of Physics department, an Indian theoretical high-energy physicist and a senior professor at Harish-Chandra Research Institute. He is an elected fellow of the National Academy of Sciences and a Shanti Swarup Bhatnagar Prize for Science and Technology awardee, 2003.)
Biman Bagchi (Alumni of Chemistry Department, A theoretical chemist and an Amrut Mody Professor at the Solid State and Structural Chemistry Unit of the Indian Institute of Science.)
Sanghamitra Bandyopadhyay (Alumni of computer science engineering department and present Director of Indian Statistical Institute Kolkata. She is also a recipient of the Shanti Swarup Bhatnagar Prize for Science and Technology, the Infosys Prize (2017) and the TWAS Prize (2018).)
Kankan Bhattacharyya (Alumni of chemistry department, a modern non-linear laser spectroscopy scientist. He is the Ex-director and chair professor of Indian Association for the Cultivation of Science, Kolkata, the oldest centre for scientific research in Asia.)
Santanu Bhattacharya (Alumni of chemistry department, an Indian bio-organic chemist, a Professor at Indian Institute of Science and present Director of Indian Association for the Cultivation of Science.)
Anuradha Lohia (Alumni of physiology department, an Indian molecular parasitologist and present Vice Chancellor of Presidency University. She is also a fellow of Indian Academy of Sciences and an Ex-Chairperson of Bose Institute in Kolkata.)
Phoolan Prasad (Alumni of Applied Mathematics Department, 1983 Shanti Swarup Bhatnagar Prize for Science and Technology Awardee and Fellow of INSA. He is currently a professor at Department of Mathematics at Indian Institute of Science Bangalore.)
Indrani Bose (Alumni of Physics department and fellow of the Indian Academy of Sciences, Bangalore, the National Academy of Sciences, Allahabad and was the first recipient of the Stree Shakthi Science Samman award (2000). She is currently a senior professor at Bose Institute, Kolkata.)
Deb Shankar Ray (Alumni of Chemistry Department and fellow of INSA, Indian Academy of Sciences, West Bengal Academy of Science and Technology and recipient of Shanti Swarup Bhatnagar Prize for Science and Technology. He is currently a professor at department of physical chemistry at Indian Association for the Cultivation of Science, Kolkata.)
Bidyendu Mohan Deb (Alumni of chemistry department, elected fellow of the International Union of Pure and Applied Chemistry, The World Academy of Sciences, the Indian National Science Academy and the Indian Academy of Sciences, and recipient of the Shanti Swarup Bhatnagar Prize for Science and Technology. He is currently a professor at IISER Kolkata.)
Debashis Mukherjee (Alumni of chemistry department. He is a recipient of the Humboldt Prize, the Chemical Pioneer Award and the Shanti Swarup Bhatnagar Prize for Science and Technology, and a fellow of the Indian Academy of Sciences and the Indian National Science Academy. He is currently Professor Emeritus at the Indian Association for the Cultivation of Science.)
Rahul Banerjee (Alumni of chemistry department and currently professor at IISER Kolkata. Banerjee is a fellow of the Royal Society of Chemistry and a recipient of the Shanti Swarup Bhatnagar Prize for Science and Technology, among many other honours.)
Anil Kumar Gain (Alumni of Applied Mathematics Department. He was an Indian Mathematician who worked closely with Ronald Fisher on Applied Statistics under the guidance of Henry Ellis Daniels, who was then President of the Royal Statistical Society.)
Animesh Chakravorty (Alumni of chemistry department and Ex-HOD of chemistry department of IIT Kanpur and IACS Kolkata and visiting professor at Texas A&M University. He is a recipient of Shanti Swarup Bhatnagar Prize for Science and Technology 1975 and TWAS Prize among other fellowships.)
Siva Brata Bhattacherjee (Alumni of Physics department and an X-ray crystallographer who worked with Physicist Satyendra Nath Bose. Since 1945 he was a Khaira Professor at Science College and was also a faculty member at University of Manchester)
Lilabati Bhattacharjee (Alumni of Physics Department and an Indian crystallographer who worked with physicist Satyendra Nath Bose, known for contributions in fields like structural crystallography, optical transform methods, computer programming, phase transformations, crystal growth, topography and instrumentation.)
Usha Ranjan Ghatak (Alumni of chemistry department, Ex-Director of the Indian Association for the Cultivation of Science, elected fellow of the Indian Academy of Sciences and the Indian National Science Academy, and recipient of the Shanti Swarup Bhatnagar Prize for Science and Technology.)
Partha Ghose (Alumni of Physics department and one of the last doctoral students of the then National Professor of Physics, Satyendra Nath Bose. He is an Indian physicist, author, philosopher, musician and former professor at the S.N. Bose National Centre for Basic Sciences.)
Asok Kumar Barua (Alumni of Physics department and Honorary Professor Emeritus at IIEST Shibpur. He is an Indian condensed matter physicist whose main research focus is optoelectronics.)
Debatosh Guha (Alumni of Radio Physics and Electronics Department, an Indian Antenna Researcher and currently a Professor of the same Institute.)
Sushmita Mitra (Alumni of Computer Science Engineering Department and currently Head INAE Chair Professor at the Machine Intelligence Unit at Indian Statistical Institute, Kolkata.)
Aditi Sen De (Alumni of Applied mathematics department, An Indian Physicist and currently associate professor in quantum information and computation group at Harish-Chandra Research Institute, Allahabad.)
Kalobaran Maiti (Alumni of Physics department and recipient of Shanti Swarup Bhatnagar Prize for Science and Technology. He is currently a professor of condensed matter Physics at Tata Institute of Fundamental Research "TIFR".)
Ujjwal Maulik (Alumni of Computer Science Engineering Department, an IEEE and INAE Fellow and Professor and former Head, Computer Science Engineering Department, Jadavpur University.)
Akhil Ranjan Chakravarty (Alumni of Chemistry department, an Indian organic chemist and currently professor at Indian Institute of Science Bangalore.)
Rabindranath Mukherjee (Alumni of Chemistry department, an Indian chemistry chair professor of IIT Kanpur and Director of Indian Institute of Science Education and Research, Kolkata.)
Palash Baran Pal (Alumni of Physics department, an Indian theoretical physicist, writer, linguist, poet and currently Emeritus Professor at Calcutta University.)
Sudhansu Datta Majumdar (Alumni of Physics Department and Faculty member at IIT Kharagpur. He was president of Calcutta Mathematical Society.)
Amar Nath Bhaduri (Alumni of chemistry department and an Indian Chemical Biologist known for his work on UDP-glucose 4-epimerase and Leishmania donovani. He was the director of Indian Institute of Chemical Biology "IICB".)
Samarendra Nath Roy (Alumni of Applied Mathematics Department, an Indian born American Mathematician and Applied Statistician well known for his pioneering contribution to multivariate statistical analysis.)
Rupamanjari Ghosh (Alumni of Physics Department, a professor of Physics Department at Jawaharlal Nehru University and present Vice Chancellor at Shiv Nadar University, Uttar Pradesh.)
Chitra Dutta (Alumni of Physics Department, an Indian Physicist who is also a Chief Scientist and Head of Structural Biology and Bioinformatics Division at Indian Institute of Chemical Biology.)
Maitree Bhattacharyya (Alumni of Physics Department, an Indian Physicist and currently professor of department of Biochemistry at University of Calcutta and worked as visiting scientist at University of California, San Diego.)
Moumita Dutta (Alumni of Applied Physics department, an Indian Physicist currently working at the Space Applications Centre "SAC", Indian Space Research Organisation "ISRO" - Ahmedabad as a scientist/engineer.)
Ashesh Prosad Mitra (Alumni of Physics Department, an Indian physicist who headed the National Physical Laboratory in Delhi, India and was the Director General of the Council of Scientific and Industrial Research (CSIR).)
Samarendra Kumar Mitra (Alumni of both Chemistry and Applied Mathematics Department. He was an Indian scientist who planned, designed and developed, in 1953, India's first computer, an electron-analog computer.)
Raj Chandra Bose (Alumni of both pure and applied mathematics department, He was a well-known Mathematician and Statistician.)
Anish Deb (Alumni and later professor of Applied Physics Department, He was also a well-known science fiction Bengali writer.)
Notable faculty
Chandrasekhar Venkata Raman (Palit Professor of Physics Department, fellow of the Royal Society and India's only Nobel laureate in Physics.)
Acharya Prafulla Chandra Ray (Professor, fellow of Royal Society and founder of Bengal Chemicals and Pharmaceuticals)
Ganesh Prasad (Rashbehari Ghosh Professor of Applied Mathematics department and considered by mathematical community as Father of Mathematical Research in India.)
Debendra Mohan Bose (Rashbehari Ghosh Professor of Physics in 1919 and succeeded C.V. Raman as the Palit Professor of Physics)
Amitava Raychaudhuri (Professor Emeritus at the Physics Department and an Indian theoretical particle physicist. He is also an S.S. Bhatnagar Awardee, J.C. Bose, INSA & IAS fellow.)
References
External links
Rajabazar Science College
University of Calcutta
Engineering colleges in West Bengal
Universities and colleges in Kolkata
All India Council for Technical Education
Academic institutions associated with the Bengal Renaissance
Educational institutions established in 1914
1914 establishments in British India |
1909823 | https://en.wikipedia.org/wiki/CHKDSK | CHKDSK | In computing, CHKDSK (short for "check disk") is a system tool and command in DOS, Digital Research FlexOS, IBM/Toshiba 4690 OS, IBM OS/2, Microsoft Windows and related operating systems. It verifies the file system integrity of a volume and attempts to fix logical file system errors. It is similar to the fsck command in Unix and similar to Microsoft ScanDisk which co-existed with CHKDSK in Windows 9x and MS-DOS 6.x.
Implementations
An early implementation of a 'CheckDisk' was the CHECKDSK that was a part of Digital Equipment Corporation hardware's diagnostics, running on early 1970s TENEX and TOPS-20.
SCP 86-DOS
The CHKDSK command was first implemented in 1980 by Tim Paterson and included in Seattle Computer Products 86-DOS.
MS-DOS / IBM PC DOS
The command is available in MS-DOS versions 1 and later.
CHKDSK is implemented as an external command. MS-DOS versions 2.x - 4.x use chkdsk.com as the executable file. MS-DOS versions 5.x and later use chkdsk.exe as the executable file.
MS-DOS 5.0 bug
CHKDSK and UNDELETE in MS-DOS 5.0 have a bug which can corrupt data: If the file allocation table of a disk uses 256 sectors, running CHKDSK /F can cause data loss and running UNDELETE can cause unpredictable results. This normally affects disks with a capacity of approximately a multiple of 128 MB. This applies to CHKDSK.EXE and UNDELETE.EXE bearing a datestamp of April 9, 1991. This bug was fixed in MS-DOS 5.0a.
Microsoft Windows
CHKDSK can be run from DOS prompt, Windows Explorer, Windows Command Prompt, Windows PowerShell or Recovery Console.
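For reference, typical invocations from an elevated Command Prompt look like the following (the drive letter is only an example; a plain invocation is a read-only check, while /f and /r are the documented repair switches):

```shell
:: Read-only scan: report file system errors on C: without fixing anything
chkdsk C:

:: Fix logical file system errors; on the system volume this is
:: deferred to the next boot, when Autochk runs
chkdsk C: /f

:: As /f, but also scan the surface for bad sectors and
:: recover readable information from them
chkdsk C: /r
```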
On Windows NT operating systems, CHKDSK can also check the disk surface for bad sectors and mark them (in MS-DOS 6.x and Windows 9x, this is a task done by Microsoft ScanDisk). The Windows Server version of CHKDSK is RAID-aware and can fully recover data in bad sectors of a disk in a RAID-1 or RAID-5 array if other disks in the set are intact.
Fragments of files and directories deemed corrupt as a result of, for example, power outages while writing, over-long file names, or invalid characters in file names, are moved into a directory under the partition's root named found.000 and renamed into generic hexadecimally numbered files and directories starting with file00000000.chk and dir_00000000.chk respectively.
On Windows NT family, a standard CHKDSK scan consists of three phases of testing file metadata. It looks for errors but does not fix them unless it is explicitly ordered to do so. The same applies to surface scan—this test, which could be extremely time-consuming on large or low-performance disks, is not carried out unless explicitly requested. CHKDSK requires exclusive write access to the volume to perform repairs.
Because it requires exclusive access to the drive, CHKDSK cannot check the system volume while Windows is running normally. Instead, the system sets a dirty bit on the disk volume and then reboots the computer. During Windows start-up, a special version of CHKDSK called Autochk (a native mode application) is started by SMSS.EXE; if the dirty bit is set, it checks the file system and attempts repairs.
Because of the exclusive access requirement and the time-consuming nature of CHKDSK operation, Windows Vista implemented a new file system health model in which the operating system fixes errors on the volumes as it encounters them. In the event that the problem is grave and a full scan is required, Action Center notifies the user to take the volume offline at the first convenience.
Windows Vista and Windows Server 2008 added self-healing ability, turned on by default, in addition to providing the CHKDSK command. It detects physical file system errors and silently fixes them on the fly. Thus, many problems previously discovered on running CHKDSK never appear. It is administered by fsutil repair command.
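The per-volume self-healing state can be inspected and toggled with the documented fsutil repair subcommands, for example (drive letter illustrative):

```shell
:: Show whether NTFS self-healing is enabled on C:
fsutil repair query C:

:: Enable self-healing (flags value 1); 0 disables it
fsutil repair set C: 1
```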
Criticism has been aimed at the tendency of AUTOCHK to automatically modify the file system when not explicitly requested by the user, who may wish to back up their data beforehand, as an attempted repair may scramble or orphan file and directory paths, especially on a multiboot installation where multiple operating systems may have made conflicting writes to the same partition.
The alleged Windows 7 bug
Before the release of Windows 7, InfoWorld reported an alleged memory leak in CHKDSK; according to the report, the chkdsk /r command would cause the memory consumption to reach the maximum and the system to crash. Randall C. Kennedy of InfoWorld attributed the original report to "various Web sources" and said that in his tests, the memory consumption reached above 90%, although he did not experience a crash. Nevertheless, Kennedy took the memory consumption for a critical bug that would derail Windows 7's launch and chastised Microsoft. Tom Warren of Neowin dismissed Kennedy's assessment of the alleged leak's significance. Steven Sinofsky of Microsoft also responded that Microsoft could not reproduce a crash either but that the massive memory consumption was by design, to improve performance, and not a leak. Ed Bott of ZDNet also reviewed the claim with his own tests and observed that no crash would occur. Noting that chkdsk /r, by design, does not work on the system drive while Windows is online, Bott concluded "it’s arguably a feature, not a bug, and the likelihood that you’ll ever crash a system this way is very, very small and completely avoidable."
DR/Novell DOS
DR DOS 6.0 also includes an implementation of the command.
FreeDOS
The FreeDOS version was developed by Imre Leber and is licensed under the GNU GPL 2.
ReactOS
The ReactOS implementation is based on a free clone developed by Mark Russinovich for Sysinternals in 1998.
It was adapted to ReactOS by Emanuele Aliberti in 1999 and supports volumes using the FAT32 filesystem.
The command does not support volumes using the Btrfs filesystem, even though ReactOS itself has supported Btrfs since version 0.4.1.
See also
Defragmentation
Data scrubbing
List of file systems
e2fsprogs
References
Further reading
External links
Official documentation about Microsoft CHKDSK
Open source CHKDSK implementation that comes with MS-DOS v2.0
External DOS commands
Hard disk software
Microsoft free software
MSX-DOS commands
OS/2 commands |
14705292 | https://en.wikipedia.org/wiki/Floyd%27s%20triangle | Floyd's triangle | Floyd's triangle is a triangular array of natural numbers, used in computer science education. It is named after Robert Floyd. It is defined by filling the rows of the triangle with consecutive numbers, starting with a 1 in the top left corner:
The problem of writing a computer program to produce this triangle has been frequently used as an exercise or example for beginning computer programmers, covering the concepts of text formatting and simple loop constructs.
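One possible solution to the exercise, sketched in Python (the function name and the right-aligned column formatting are choices of this example, not part of any standard statement of the problem):

```python
def floyd_triangle(n):
    """Return the first n rows of Floyd's triangle as one string."""
    last = n * (n + 1) // 2              # largest entry fixes the column width
    width = len(str(last))
    lines, k = [], 1
    for r in range(1, n + 1):            # row r holds r consecutive numbers
        lines.append(" ".join(str(k + i).rjust(width) for i in range(r)))
        k += r
    return "\n".join(lines)

print(floyd_triangle(5))
```

For n = 5 this prints five rows ending in 1, 3, 6, 10 and 15, with 1 in the top left corner.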
Properties
The numbers along the left edge of the triangle are the lazy caterer's sequence and the numbers along the right edge are the triangular numbers. The nth row sums to n(n² + 1)/2, the magic constant of an n × n magic square.
Summing up the row sums in Floyd's triangle reveals the doubly triangular numbers, triangular numbers with an index that is triangular.
1 = 1 = T(T(1))
1 + 2 + 3 = 6 = T(T(2))
1 + 2 + 3 + 4 + 5 + 6 = 21 = T(T(3))
Each number in the triangle is smaller than the number below it by the index of its row.
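The properties above are easy to verify numerically; a small Python check (the helper names are this sketch's own):

```python
def floyd_rows(n):
    """Rows of Floyd's triangle: row r holds the next r consecutive numbers."""
    rows, k = [], 1
    for r in range(1, n + 1):
        rows.append(list(range(k, k + r)))
        k += r
    return rows

def tri(n):                                # n-th triangular number
    return n * (n + 1) // 2

rows = floyd_rows(8)
for r, row in enumerate(rows, start=1):
    assert row[0] == tri(r - 1) + 1          # left edge: lazy caterer's sequence 1, 2, 4, 7, ...
    assert row[-1] == tri(r)                 # right edge: triangular numbers
    assert sum(row) == r * (r * r + 1) // 2  # row sum = magic constant of an r x r square
    if r < len(rows):
        for c, v in enumerate(row):          # entry directly below is larger by the row index r
            assert rows[r][c] - v == r
print("all properties hold")
```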
See also
Pascal's triangle
References
External links
Floyd's triangle at Rosetta code
Triangles of numbers
Computer programming
Computer science education |
25818624 | https://en.wikipedia.org/wiki/Gulf%20War%20air%20campaign | Gulf War air campaign | The air campaign of the Gulf War, also known as the 1991 bombing of Iraq, was an extensive aerial bombing campaign from 17 January 1991 to 23 February 1991. The Coalition of the Gulf War flew over 100,000 sorties, dropping 88,500 tons of bombs, widely destroying military and civilian infrastructure. The air campaign was commanded by USAF Lieutenant General Chuck Horner, who briefly served as Commander-in-Chief—Forward of U.S. Central Command while General Schwarzkopf was still in the United States. The British air commanders were Air Vice-Marshal Andrew Wilson (to 17 November 1990) and Air Vice-Marshal Bill Wratten (from 17 November). The air campaign had largely finished by 23 February 1991 when the coalition invasion of Kuwait took place.
The initial strikes were carried out by Tomahawk cruise missiles launched from warships situated in the Persian Gulf, by F-117A Nighthawk stealth bombers with an armament of laser-guided smart bombs, and by F-4G Wild Weasel aircraft as well as F/A-18 Hornet aircraft armed with HARM (High Speed Anti-Radiation) anti-radar missiles. These first attacks allowed F-14, F-15, F-16, and F/A-18 fighter bombers to gain air superiority over Iraq and then continue to drop TGM-guided and laser-guided bombs.
Armed with a GAU-8 rotary cannon and infrared-imaging or optically guided Maverick missiles, A-10 Thunderbolts bombed and destroyed Iraqi armored forces, supporting the advance of US ground troops. Marine Corps close air support AV-8B Harriers employed their 25mm rotary cannon, Mavericks, cluster munitions, and napalm against the Iraqi dug-in forces to pave the way forward for the Marines breaching Saddam's defenses. The AH-64 Apache and AH-1 Cobra attack helicopters fired laser-guided Hellfire missiles and TOW missiles which were guided to tanks by ground observers or by scout helicopters, such as the OH-58D Kiowa. The Coalition air fleet also made use of the E-3A Airborne Warning and Control Systems and of a fleet of B-52 bombers.
Opposing Forces
Coalition Armed Forces
On the eve of Operation Desert Storm, the Coalition of the Gulf War numbered 2,430 fixed-wing aircraft in the Kuwaiti Theater of Operations (KTO), almost three-fourths of which belonged to the United States Armed Forces. When the ground assault began on 24 February, that number had increased to over 2,780. Representing a relatively high tooth-to-tail ratio, approximately 60 percent of Coalition aircraft were "shooters" or combat aircraft. The United States Air Force deployed over 1,300 aircraft during the course of the campaign, followed by the United States Navy with over 400 aircraft and the United States Marine Corps with approximately 240. Collectively, the other Coalition partners accounted for over 600 aircraft. Saudi Arabia, Kuwait, Bahrain, Qatar, and the United Arab Emirates all contributed air forces to the campaign, as did the United Kingdom (Operation Granby), France (Opération Daguet), Canada (Operation Friction) and Italy (Operazione Locusta). South Korea, Argentina and New Zealand provided a small number of transport aircraft, with South Korea, Kuwait, Italy and Japan also paying for the cost of 200 airlift flights into Saudi Arabia. Additionally, Germany, Belgium, Italy, the Netherlands and Luxembourg each sent a squadron of fighters as part of their NATO obligation to protect Turkey, although these aircraft were strictly defensive and did not take part in the campaign against Iraq.
In terms of quantity and quality, Coalition airpower was superior to its Iraqi counterpart. This was particularly the case in special capabilities which the Iraqis simply lacked, including aerial refueling, airborne command and control, electronic warfare, precision munitions and stealth aircraft. Such capabilities were primarily (if not exclusively) provided for by the United States. In space, sixteen military communications satellites (fourteen of which belonged to the United States) were supplemented with five commercial satellites to provide the vast majority of communication within the theater of operations. Combined they had a total transmission rate of 200 million bits per second, or equivalent to 39,000 simultaneous telephone calls. A range of other satellites provided additional intelligence-gathering services, including the Defense Support Program, Landsat program, SPOT, and six meteorological satellites.
One area where the Coalition was deficient was in tactical reconnaissance. Aircraft specializing in reconnaissance were reportedly given low priority due to lack of space and the belief that strategic platforms could take over their role, a belief which would prove misplaced. Efforts to compensate for this deficiency included using regular fighter aircraft in the reconnaissance role and RQ-2 Pioneer unmanned aerial vehicles. Deployed mainly by the US Marines, the RQ-2 was sufficient for certain missions but lacking in many respects compared to dedicated aircraft.
Iraqi Armed Forces
At the time of the Gulf War the Iraqi Air Force (IQAF) was the sixth largest in the world, consisting of over 750 fixed-wing combat aircraft operating out of 24 primary airfields, with 13 active dispersal fields and 19 additional dispersal fields. Iraq had also constructed 594 hardened aircraft shelters to house nearly its entire air force, protecting them from attack. Iraq similarly possessed an impressive array of air defenses. Its inventory included 16,000 surface-to-air missiles total, both radar and infrared guided, with over 3,600 of these major missile systems. Up to 154 SAM sites and 18 SAM support facilities were located in Iraq, with another 20 or 21 sites in the Kuwaiti theater of operations (KTO). Iraq also possessed a large amount of anti-aircraft artillery (AAA), with 972 AAA sites, 2,404 fixed AA guns and 6,100 mobile AA guns. Providing complete coverage of Iraqi airspace were 478 early warning radars, 75 high-frequency radars, and 154 acquisition radars.
Much of this equipment was combined into an integrated air defense system (IADS) overseen by Kari, an automated C2 computer system developed by Iraq and built by French contractors in the wake of Operation Opera (Kari is "Irak", the French spelling of Iraq, spelled backwards). Kari tied the entire IADS to a single location, the national Air Defense Operations Center (ADOC) located in an underground bunker in Baghdad, and in turn divided the country into four defense sectors each overseen by a Sector Operations Center (SOC) located at H-3, Kirkuk, Taji and Talil; a fifth SOC was added at Ali Al Salem to cover the recently conquered Kuwait. Each SOC oversaw the local airspace and commanded anywhere from two to five Intercept Operations Centers (IOCs) per sector. The IOCs were located in bunkers constructed at Iraqi Air Force bases and tied into local radar systems, whose information they could pass on to their SOC and thence on to Baghdad. In this way a SOC was capable of simultaneously tracking 120 aircraft and selecting the appropriate weapon system to engage them. The SOC could automatically target for SA-2 and SA-3 SAM systems in their sector, which meant the SAMs did not have to turn on their own radar and reveal their position, or an IOC could direct local interceptors to engage the targets. Baghdad itself was one of the most heavily defended cities in the world—several times more heavily defended than Hanoi during the Vietnam War—protected by 65% of Iraq's SAMs and over half of its AAA pieces.
Though impressive on paper, the Iraqi Air Force's primary role was to act as a regional deterrent, with a secondary role of supporting the Iraqi Army, rather than attempt to gain air superiority in any conflict. Basic training was rigid, inflexible, and left pilots with extremely poor situational awareness. Additional training was provided by the Soviet Union, with Mirage pilots attending courses in France. Soviet trainers generally passed everyone but assessed that less than half of the IQAF students would have been accepted into Soviet fighter units. French training (which the Iraqis considered decidedly superior to Soviet) resulted in an 80% failure rate; nevertheless those who failed were qualified to fly upon return to Iraq. In all, a third of Iraqi pilots were deemed to meet the standards of Western pilots, and almost all of them lacked aggressiveness and were overly dependent on ground control to direct them to targets. In addition, the air force suffered from spare parts shortages and maintenance shortfalls, and much of its equipment, like the MiG-21, was outdated and of dubious combat value. Only 170 aircraft like the MiG-29 and Mirage F1 were considered comparable to Coalition aircraft, with the MiG-29s being downgraded export models.
Likewise, Kari itself had a number of deficiencies of which Coalition air forces would take advantage. The system was primarily oriented towards defending against much smaller attacks from Iraq's most likely enemies—Iran, Syria and Israel—and focused on point defense rather than area defense. This meant there were significant gaps in its coverage, particularly on the orientation from Saudi Arabia straight to Baghdad, and attacking aircraft would be able to approach their target from multiple directions. Like its aircraft, much of Iraq's ground air defenses were also outdated: SA-2 and SA-3 systems were nearing the end of their operational lifespan and their countermeasures were well-known at this point, while its other SAM systems were not much younger. Furthermore, the IADS was centralized to a fault. Although each IOC was datalinked to its respective SOC and in turn back to the ADOC, the defense sectors could not share information with one another. If a SOC was knocked out of action the attached air defense weapons lost all ability to coordinate their response; its respective SAM batteries would be forced to rely on their own radar systems while most AAA lacked any radar guidance.
Main air campaign starts
A day after the deadline set in United Nations Security Council Resolution 678, the coalition launched a massive air campaign, which began the general offensive codenamed Operation Desert Storm with more than 1,000 sorties launching per day. It began on 17 January 1991, at 2:38 AM, Baghdad time, when Task Force Normandy (eight US Army AH-64 Apache helicopters led by four US Air Force MH-53 Pave Low helicopters) destroyed Iraqi radar sites near the Iraqi–Saudi Arabian border which could have warned Iraq of an upcoming attack.
At 2:43 A.M. two USAF EF-111 Ravens with terrain-following radar led 22 USAF F-15E Strike Eagles in an assault on airfields in western Iraq. Minutes later, one of the EF-111 crews—Captain James Denton and Captain Brent Brandon—destroyed an Iraqi Dassault Mirage F1 when their low-altitude maneuvering led the F1 to crash to the ground.
At 3:00 AM, ten USAF F-117 Nighthawk stealth bombers, under the protection of a three-ship formation of EF-111s, bombed Baghdad, the capital. The striking force came under fire from 3,000 anti-aircraft guns on the ground.
Within hours of the start of the coalition air campaign, a P-3 Orion called Outlaw Hunter developed by the US Navy's Space and Naval Warfare Systems Command, which was testing a highly specialised over-the-horizon radar, detected a large number of Iraqi patrol boats and naval vessels attempting to make a run from Basra and Umm Qasr to Iranian waters. Outlaw Hunter vectored in strike elements, which attacked the Iraqi naval flotilla near Bubiyan Island, destroying eleven vessels and damaging scores more.
Concurrently, US Navy BGM-109 Tomahawk cruise missiles struck targets in Baghdad, and other coalition aircraft struck targets throughout Iraq. Government buildings, TV stations, airfields, presidential palaces, military installations, communication lines, supply bases, oil refineries, a Baghdad airport, electric powerplants and factories making Iraqi military equipment were all destroyed by massive aerial and missile attacks from coalition forces.
Five hours after the first attacks, Iraq's state radio broadcast a voice identified as Saddam Hussein declaring that "The great duel, the mother of all battles has begun. The dawn of victory nears as this great showdown begins."
The Gulf War is sometimes called the "computer war", due to the advanced computer-guided weapons and munitions used in the air campaign, which included precision-guided munitions and cruise missiles, even though these were very much in the minority when compared with "dumb bombs" used. Cluster munitions and BLU-82 "Daisy Cutters" were also used.
Iraq responded by launching eight Iraqi modified Scud missiles into Israel the next day. These missile attacks on Israel were to continue throughout the six weeks of the war.
On the first night of the war, two F/A-18s from the carrier USS Saratoga were flying outside of Baghdad when two Iraqi MiG-25s engaged them. In the beyond-visual-range (BVR) kill, an Iraqi MiG-25 piloted by Zuhair Dawood fired an R-40RD missile, shooting down an American F/A-18C Hornet and killing its pilot, Lieutenant Commander Scott Speicher.
In an effort to demonstrate their own air offensive capability, on 24 January the Iraqis attempted to mount a strike against the major Saudi oil refinery, Ras Tanura. Two Mirage F1 fighters laden with incendiary bombs and two MiG-23s (acting as fighter cover) took off from bases in Iraq. They were spotted by US AWACs, and two Royal Saudi Air Force F-15s were sent to intercept. When the Saudis appeared the Iraqi MiGs turned tail, but the Mirages pressed on. Captain Iyad Al-Shamrani, one of the Saudi pilots, maneuvered his jet behind the Mirages and shot down both aircraft. A few days later the Iraqis made their last true air offensive of the war, unsuccessfully attempting to shoot down F-15s patrolling the Iranian border. After this episode, the Iraqis made no more air efforts of their own, sending most of their jets to Iran in hopes that they might someday get their air force back.
The first priority for Coalition forces was the destruction of Iraqi command and control bunkers, Scud missile launch pads and storage areas, telecommunications and radio facilities, and airfields. The attack began with a wave of deep-penetrating aircraft – F-111s, F-15Es, Tornado GR1s, F-16s, A-6s, A-7Es, and F-117s, complemented by F-15C, F-14s and Air Defense Tornados. EA-6Bs, EF-111 radar jammers, and F-117A stealth planes were heavily used in this phase to elude Iraq's extensive SAM systems and anti-aircraft weapons. The sorties were launched mostly from Saudi Arabia and the six Coalition aircraft carrier battle groups (CVBG) in the Persian Gulf and Red Sea. During the initial 24 hours 2,775 sorties were flown, including seven B-52s which flew a 35-hour nonstop 14,000-mile round-trip from Barksdale Air Force Base and launched 35 AGM-86 CALCM cruise missiles against eight Iraqi targets.
Persian Gulf CVBGs included USS Midway, USS Theodore Roosevelt, and USS Ranger. USS America, USS John F. Kennedy, and USS Saratoga operated from the Red Sea (USS America transitioned to the Persian Gulf midway through the air war).
Wild Weasels were very effective; unlike the North Vietnamese, Iraqi SAM operators did not turn their radars off until just before launch. Antiaircraft defenses, including shoulder-launched ground-to-air missiles, were surprisingly ineffective against coalition aircraft, and the coalition suffered only 75 aircraft losses in over 100,000 sorties, though only 42 of these were the result of Iraqi action. The other 33 were lost to accidents. RAF and US Navy aircraft which flew at low altitudes to avoid radar were particularly vulnerable, though this changed when the aircrews were ordered to fly above the AAA.
The next coalition targets were command and communication facilities. Saddam Hussein had closely micromanaged the Iraqi forces in the Iran–Iraq War, and initiative at lower levels was discouraged. Coalition planners hoped that Iraqi resistance would quickly collapse if deprived of command and control.
Some of Iraq's air force squadrons escape
The first week of the air war saw a few Iraqi sorties, but these did little damage, and 36 Iraqi fighter aircraft were shot down by Coalition planes. Soon after, the Iraqi Air Force began fleeing to Iran, with 115 to 140 aircraft flown there. This mass exodus of Iraqi aircraft took coalition forces by surprise as the Coalition had been expecting them to flee to Jordan, a nation friendly to Iraq, rather than Iran, a long-time enemy. As the purpose of the war was to destroy Iraq militarily, the coalition had placed aircraft over western Iraq to try to stop any retreat into Jordan. This meant they were unable to react before most of the Iraqi aircraft had made it "safely" to Iranian airbases. The coalition eventually established a virtual "wall" of F-15 Eagle, F-14 Tomcat fighters, and F-16 Fighting Falcons on the Iraq–Iran border (called MIGCAP), thereby stopping the exodus of fleeing Iraqi fighters. In response, the Iraqi Air Force launched Operation Samurra in an attempt to break the blockade imposed on them. The resulting air battle would be the last offensive action of the war for the Iraqi Air Force. Iran did not allow the aircrews to be released until years later. Iran held on to the Iraqi aircraft for over 20 years, returning 88 of them in 2014. However, many Iraqi planes remained in Iran, and several were destroyed by coalition forces.
Infrastructure bombing
The third and largest phase of the air campaign ostensibly targeted military targets throughout Iraq and Kuwait: Scud missile launchers, weapons research facilities, and naval forces. About one-third of the Coalition airpower was devoted to attacking Scuds, some of which were on trucks and therefore difficult to locate. Some U.S. and British special forces teams had been covertly inserted into western Iraq to aid in the search and destruction of Scuds. However, the lack of adequate terrain for concealment hindered their operations, and some of them were killed or captured such as occurred with the widely publicised Bravo Two Zero patrol of the SAS.
Civilian infrastructure
Coalition bombing raids destroyed Iraqi civilian infrastructure. 11 of Iraq's 20 major power stations and 119 substations were totally destroyed, while a further six major power stations were damaged. At the end of the war, electricity production was at four percent of its pre-war levels. Bombs destroyed the utility of all major dams, most major pumping stations, and many sewage treatment plants; telecommunications equipment, port facilities, oil refineries and distribution networks, railroads and bridges were also destroyed.
Iraqi targets were located by aerial photography and GPS coordinates. According to the non-fiction book Armored Cav by Tom Clancy, in August 1990 a USAF senior officer arrived at Baghdad International Airport carrying a briefcase with a GPS receiver inside. After being taken to the U.S. embassy he took a single GPS reading in the courtyard of the complex. Upon return to the U.S., the coordinates were used as the basis for designating targets in Baghdad.
The U.S. bombed highways and bridges linking Jordan and Iraq, crippling infrastructure on both sides.
Civilian casualties
The U.S. government claimed the Iraqi government fabricated numerous attacks on Iraqi holy sites in order to rally the Muslim community. One such instance had Iraq reporting that coalition forces attacked the holy cities of Najaf and Karbala. The final number of Iraqi civilians killed was 2,278, while 5,965 were reported wounded.
On 13 February 1991, two laser-guided smart bombs destroyed the Amiriyah blockhouse, which was a civilian air-raid shelter, killing hundreds of civilians. U.S. officials claimed that the blockhouse was also a military communications centre. Jeremy Bowen, a BBC correspondent, was one of the first television reporters on the scene. Bowen was given access to the site and did not find evidence of military use. However, Wafiq al-Sammarrai, Chief of Iraqi General Military Intelligence would later assert that the shelter had been used by elements of the Iraqi Intelligence Service, and that it had been visited on occasion by Saddam Hussein himself. A day after the Amiriyah attack, a British warplane fired a laser-guided missile at a bridge in the Al-Fallujah neighborhood west of Baghdad. It missed and hit a residential area, killing up to 130 civilians. When friends and relatives rushed to the scene to assist the injured, British warplanes returned to bomb them, as well.
Vulnerability of Iraq to air attacks
The air campaign devastated entire Iraqi brigades deployed in the open desert in combat formation. It also prevented an effective Iraqi resupply of units engaged in combat, and prevented some 450,000 Iraqi troops from achieving a larger force concentration.
The air campaign had a significant effect on the tactics employed by opposing forces in subsequent conflicts. Entire Iraqi divisions were dug in the open while facing U.S. forces. They were not dispersed, as with the Yugoslav forces in Kosovo. Iraqi forces also tried to reduce the length of their supply lines and the total area defended.
Losses
An estimated 407 Iraqi aircraft were either destroyed or flown to Iran and permanently impounded there. During Desert Storm, 36 aircraft were shot down in aerial combat. Three helicopters and 2 fighters were shot down during the invasion of Kuwait on 2 August 1990. Kuwait claims to have shot down as many as 37 Iraqi aircraft. These claims have not been confirmed. In addition, 68 fixed wing aircraft and 13 helicopters were destroyed while on the ground, and 137 aircraft were flown to Iran and never returned.
The Coalition lost a total of 75 aircraft (52 fixed-wing aircraft and 23 helicopters) during Desert Storm, with 39 fixed-wing aircraft and 5 helicopters lost in combat. One coalition fighter was lost in air-to-air combat, a U.S. Navy F/A-18 piloted by Scott Speicher. Other Iraqi air-to-air claims surfaced over the years; all were disputed. One B-52G was lost while returning to its operating base on Diego Garcia, when it suffered a catastrophic electrical failure and crashed into the Indian Ocean, killing 3 of the 6 crew members on board. The rest of the Coalition losses came from anti-aircraft fire. The Americans lost 28 fixed-wing aircraft and 15 helicopters; the British lost 7 fixed-wing aircraft; the Saudi Arabians lost 2; the Italians lost 1; and the Kuwaitis lost 1. During the invasion of Kuwait on 2 August 1990, the Kuwaiti Air Force lost 12 fixed-wing aircraft, which were destroyed on the ground, and 8 helicopters, 6 of which were shot down and 2 of which were destroyed while on the ground.
See also
Air engagements of the Gulf War
Gulf War Air Power Survey
List of Gulf War pilots by victories
References
Bibliography
External links
Bibliography of the Desert Shield and Desert Storm compiled by the United States Army Center of Military History
Airstrikes in Iraq |
158262 | https://en.wikipedia.org/wiki/Fail-deadly | Fail-deadly | Fail-deadly is a concept in nuclear military strategy that encourages deterrence by guaranteeing an immediate, automatic, and overwhelming response to an attack, even if there is no one to trigger such retaliation. The term fail-deadly was coined as a contrast to fail-safe.
Military usage
Fail-deadly operation is an example of second-strike strategy, in that aggressors are discouraged from attempting a first strike attack. Under fail-deadly nuclear deterrence, policies and procedures controlling the retaliatory strike authorize launch even if the existing command and control structure has already been neutralized by a first strike. The deterrent efficacy of such a system clearly depends on other nuclear-armed nations having foreknowledge of it. The Soviet Union used a fail-deadly system known as Dead Hand (codenamed "Perimeter"); after the collapse of the Soviet Union, Russia retained the system (although it is now only activated in times of crisis).
Fail-deadly can refer to specific technology components, or the controls system as a whole. The United Kingdom's fail-deadly policies delegate strike authority to submarine commanders in the event of a loss of command (using letters of last resort), ensuring that even when uncoordinated, nuclear retaliation can be carried out.
An example of the implementation of such a strategy could be: Ballistic missile submarines are ordered to surface at periodic intervals to receive communications indicating that no change has occurred in the defense condition. Should the submarines be unable to receive the proper command and control signals indicating normal, peacetime conditions, their orders would be to launch their nuclear missiles under the assumption that command and control structures had been destroyed in a nuclear attack and that retaliation was therefore necessary. All available means of verification and all due caution would naturally be applied. This approach is obviously exceptionally dangerous for a variety of reasons. The strategy's intended value lies in deterrence against attack on command, control, communications, and computer (see C4I) networks by any potential adversary.
Fail-deadly is also associated with massive retaliation, a deterrence strategy that ensures that the counterstrike will be conducted on a larger scale than the initial attack.
A dead man's switch can be used as a fail-deadly instrument, for instance a switch that must be constantly held to prevent the triggering of an explosive. This would ensure that a suicide bombing is not prevented by killing the person that has the bomb.
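As a sketch of the idea (the class name, timeout, and callback below are illustrative, not any real system), a fail-deadly dead man's switch is a watchdog that fires unless it is periodically reset:

```python
import time

class DeadMansSwitch:
    """Fail-deadly watchdog: the trigger fires unless reset within the timeout."""

    def __init__(self, timeout_s, on_trigger):
        self.timeout_s = timeout_s
        self.on_trigger = on_trigger
        self.last_reset = time.monotonic()
        self.triggered = False

    def reset(self):
        """Operator signals 'all clear', postponing the trigger."""
        self.last_reset = time.monotonic()

    def poll(self):
        """Check the timer; fire if no reset arrived within the timeout."""
        if not self.triggered and time.monotonic() - self.last_reset > self.timeout_s:
            self.triggered = True
            self.on_trigger()
        return self.triggered
```

The inversion relative to fail-safe is visible in `poll`: the absence of a signal is what causes action, so killing the operator (or the communications link) cannot prevent the trigger.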
See also
Doomsday device
Fail-safe
Failing badly
Launch on warning
Mutual assured destruction
Dead man's switch
Special Weapons Emergency Separation System
Two Generals' Problem
Dead Hand (nuclear war)
AN/DRC-8 Emergency Rocket Communications System
Samson Option
Dr. Strangelove
References
Nuclear strategy
Nuclear command and control |
21993041 | https://en.wikipedia.org/wiki/EditDV | EditDV | EditDV was a video editing application released by Radius, Inc. in late 1997 as an evolution of their earlier Radius Edit product. EditDV was one of the first products providing professional-quality editing of the then-new DV format at a relatively affordable cost ($999 including a Radius FireWire capture card) and was named "The Best Video Tool of 1998". Originally EditDV was available for Macintosh only, but in February 2000 EditDV 2.0 for Windows was released. With version 3.0, EditDV's name was changed to CineStream.
Features
Originally bundled with a FireWire card, EditDV 1.5 was updated into a less expensive software-only package for use with the newer Power Mac G3, which came with a built-in FireWire interface. Later, a scaled-down version named EditDV 1.6.1 Unplugged was released as freeware alongside EditDV 2.0.
Unlike many other applications at the time, which transcoded video to M-JPEG for editing, EditDV provided lossless native editing of the DV format. Only transitions (such as dissolves or wipes), effects (such as rotating or scaling the video, adjusting the audio level, or adding titles) and filters (such as changing the brightness or color balance) needed to be rendered. A disadvantage of this approach was that it did not work with analogue video capture.
EditDV was built on top of QuickTime and supported QuickTime filters as well as its own built-in effects and transitions. Effects could be animated using keyframes. EditDV 2.0 worked natively with the QuickTime MOV format; for Microsoft Windows users, where the standard was AVI, a provided external conversion tool had to be used afterwards when AVI output was wanted.
The user interface had a Project window for organising clips into bins, a Sequence window with a multi-track timeline for arranging clips into a program using three-point editing, and Source and Program monitor windows. A finished program could either be exported as a QuickTime movie or written back to DV tape using the "print to video" command.
Version 3.0, then renamed CineStream, shifted towards web designers who wanted to add video streaming interactivity to a website. The new feature called EventStream allowed setting clickable hot spots to link to another location, either to another page with a URL or to another video. This feature distinguished CineStream from the rest of the competition.
Product line
The EditDV product family included a number of related products, all sharing a similar name:
EditDV Video editing software (Mac and Windows)
SoftDV A QuickTime software codec for playing DV media, included as part of EditDV (Mac and Windows)
MotoDV PCI-based FireWire interface with DV capture software (Mac and Windows)
PhotoDV Software to capture high-quality stills from a DV tape using MotoDV hardware (Mac and Windows)
RotoDV Software for rotoscoping (painting over video), released in Sept 1999 (Macintosh only)
Name changes and eventual demise
In 1999 the company Radius Inc. changed its name to Digital Origin. In 2000 Digital Origin Inc (and EditDV) was bought by Media 100. In early 2001 Media 100 released an updated version of EditDV under the new name CineStream 3.0. Later that year (October 2001) Media 100 was bought by Autodesk's Discreet Division.
CineStream for Macintosh required classic Mac OS. It was never ported to Mac OS X and faced increasing competition on that platform from Apple's own Final Cut Pro application. Development of EditDV/Cinestream was officially discontinued in 2002.
References
Further reading
a review of EditDV 1.5
Film and video technology
Video editing software |
7637273 | https://en.wikipedia.org/wiki/Generic%20Model%20Organism%20Database | Generic Model Organism Database | The Generic Model Organism Database (GMOD) project provides biological research communities with a toolkit of open-source software components for visualizing, annotating, managing, and storing biological data. The GMOD project is funded by the United States National Institutes of Health, National Science Foundation and the USDA Agricultural Research Service.
History
The GMOD project was started in the early 2000s as a collaboration between several model organism databases (MODs) which shared a need to create similar software tools for processing data from sequencing projects. MODs, or organism-specific databases, describe genome and other information about important experimental organisms in the life sciences and capture the large volumes of data and information being generated by modern biology. Rather than each group designing its own software, four major MODs (FlyBase, Saccharomyces Genome Database, Mouse Genome Database, and WormBase) worked together to create applications that provide functionality needed by all MODs, such as software to help manage the data within the MOD, and to help users access and query the data.
The GMOD project works to keep software components interoperable. To this end, many of the tools use a common input/output file format or run off a Chado schema database.
Chado database schema
The Chado schema aims to cover many of the classes of data frequently used by modern biologists, from genetic data to phylogenetic trees to publications to organisms to microarray data to IDs to RNA/protein expression. Chado makes extensive use of controlled vocabularies to type all entities in the database; for example: genes, transcripts, exons, transposable elements, etc., are stored in a feature table, with the type provided by Sequence Ontology. When a new type is added to the Sequence Ontology, the feature table requires no modification, only an update of the data in the database. The same is largely true of analysis data that can be stored in Chado as well.
The existing core modules of Chado are:
sequence - for sequences/features
cv - for controlled-vocabs/ontologies
general - currently just dbxrefs
organism - taxonomic data
pub - publication and references
companalysis - augments sequence module with computational analysis data
map - non-sequence maps
genetic - genetic and phenotypic data
expression - gene expression
natural diversity - population data
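The typing convention described above, in which every feature row references a controlled-vocabulary term rather than the schema hard-coding one table per feature kind, can be illustrated with a minimal in-memory sketch (the Sequence Ontology accessions shown are believed correct, but the structures here are a simplification for illustration, not the actual Chado schema):

```python
# Chado-style typing: features reference a controlled-vocabulary (cvterm)
# entry instead of there being one table per feature kind.
sequence_ontology = {
    "SO:0000704": "gene",
    "SO:0000673": "transcript",
    "SO:0000147": "exon",
}

features = []  # stand-in for the single 'feature' table

def add_feature(uniquename, type_id):
    if type_id not in sequence_ontology:
        raise ValueError("unknown Sequence Ontology term: %s" % type_id)
    features.append({"uniquename": uniquename, "type_id": type_id})

# Supporting a new feature kind needs only new vocabulary data,
# not a schema modification:
sequence_ontology["SO:0000188"] = "intron"
add_feature("abc-1", "SO:0000704")        # a gene
add_feature("abc-1.t1.i1", "SO:0000188")  # an intron, newly supported
```

This is why a Sequence Ontology update requires only a data load in Chado: the `feature` table itself never changes shape.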
Software
The full list of GMOD software components is found on the GMOD Components page. These components include:
Participating databases
The following organism databases are contributing to and/or adopting GMOD components for model organism databases.
Related projects
Bioperl, BioJava, Biopython, BioRuby, etc.
Ensembl
Gene Ontology
DAS
Genomics Unified Schema
Manatee: Manual Annotation Tool
Biocurator.org
Open Biomedical Ontologies
Sequence Ontology Project
See also
Biological database
Genome project
Genomics
Genome
Genome Compiler – an all-in-one software platform for DNA design & visualization, data management and collaboration.
References
External links
GMOD website
Model organism databases
Genomics
Bioinformatics software |
27698367 | https://en.wikipedia.org/wiki/Thursby%20DAVE | Thursby DAVE | DAVE was commercial-grade SMB/CIFS software from Thursby Software Systems that gave Apple Macs access to Microsoft Windows file and print sharing.
DAVE was first introduced in 1996. Microsoft DFS support was added in 2002.
Thursby co-wrote the Mac SMB/CIFS standards with Microsoft in 2002.
Although Mac OS X did include Samba support, the built-in SMB support in early OS X versions had limitations that DAVE attempted to overcome, offering better compatibility and performance for enterprise environments, and specifically for networked use of Mac apps such as Final Cut Pro, Creative Suite, Avid and Office.
The ADmitMac and ADmitMac PKI products build on DAVE, adding enhanced support for Microsoft Active Directory and US government Public-key infrastructure systems.
In 2017, Thursby announced that DAVE and ADmitMac were now end-of-life products, writing:

Thursday, July 27, 2017 (Arlington, TX) - Today Thursby Software Systems, Inc. (Thursby) announced the end-of-life for both their legacy DAVE® and ADmitMac® products for the Apple Macintosh. DAVE was first introduced in 1996, 21 years ago, as the first Microsoft SMB file-sharing client on the Mac. In 2003, Thursby introduced ADmitMac as the first Active Directory solution for the Mac. Since that time, Apple has continued to improve their operating system by inclusion of their own SMB and Active Directory technology, in part with the help of Thursby. With the recent announcement of the High Sierra operating system, Thursby believes that Apple has finally culminated a total Microsoft file systems to the quality that eliminates the need for either DAVE or ADmitMac. Thursby will continue to support customers under contract, but does not expect any enhancements for the next operating system release. Legacy licenses with 30-day configuration support will continue to be available to all customers.

The final versions were DAVE v13 and ADmitMac v10.
References
Classic Mac OS software |
350294 | https://en.wikipedia.org/wiki/Game%20Boy%20Printer | Game Boy Printer | The Game Boy Printer, known as the Pocket Printer in Japan, is a thermal printer accessory released by Nintendo in 1998 which ceased production in early 2003. The Game Boy Printer is compatible with all Game Boy systems except the Game Boy Micro and is designed to be used in conjunction with the Game Boy Camera. It also prints images from compatible late-generation Game Boy and Game Boy Color games (listed below). It runs on six AA batteries and uses a proprietary 38mm wide thermal paper with adhesive backing, originally sold in white, red, yellow and blue colors. In Japan, a bright yellow Pokémon version of the Game Boy Printer was released, featuring a feed button in the style of a Poké Ball.
Games with Game Boy Printer support
Alice in Wonderland
Asteroids
Austin Powers: Oh, Behave!
Austin Powers: Welcome to My Underground Lair!
Cardcaptor Sakura: Itsumo Sakura-chan to Issho!
Cardcaptor Sakura: Tomoe Shōgakkō Daiundōkai
Disney's Dinosaur
Disney's Tarzan
Donkey Kong Country
E.T.: Digital Companion
Fisher-Price Rescue Heroes: Fire Frenzy
Game Boy Camera
Harvest Moon 2
Kakurenbo Battle Monster Tactics
Klax
The Legend of Zelda: Link's Awakening DX
The Little Mermaid 2: Pinball Frenzy
Little Nicky
Logical
Magical Drop
Mary-Kate and Ashley Pocket Planner
Mickey's Racing Adventure
Mickey's Speedway USA
Mission: Impossible
NFL Blitz
Perfect Dark
Pokémon Crystal
Pokémon Gold and Silver (except Korean versions)
Pokémon Pinball
Pokémon Trading Card Game
Pokémon Card GB2: Great Rocket-Dan Sanjō!
Pokémon Yellow: Special Pikachu Edition
Puzzled
Quest for Camelot
Roadsters
Super Mario Bros. Deluxe
Tony Hawk's Pro Skater 2
Trade & Battle: Card Hero
Game Boy Printer Thermal Paper
Released alongside the Game Boy Printer in 1998, Nintendo-manufactured thermal paper refill rolls were produced in white, cream, blue, yellow, and red colour variants, all of which had an integrated adhesive backing. They had a roll width of 38mm and a roll diameter of 30mm, with a central 12mm diameter red cardboard spindle. A typical roll had 390–400 cm of length. After powering the printer on, a clip at the rear of the protruding translucent grey refill housing is depressed, allowing the housing to be lifted away. The thermal paper roll is inserted upside-down, unravelled end facing down, with this end slotted into a thin slot. The maroon 'FEED' button is then pressed and held, which engages the uptake motor and pulls the paper through to the exit slot adjacent to the printer logo. This slot has an integrated serrator, which allows finished prints to be torn off the main paper feed in a zig-zag fashion. Forcibly pulling the paper opposite to the feed direction causes permanent damage to the gearing within the feed mechanism.
When a picture was printed from the Game Boy Camera, it was printed with a 5mm margin above and below the picture and a picture height of 23mm, giving a total height of 33mm per picture. Although on-box refill advertisements boasted up to 180 pictures per roll, in practice a typical roll could only print between 118 and 121 pictures.
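The per-roll figures quoted above follow directly from the print height and the stated roll lengths:

```python
# 23 mm picture plus a 5 mm margin above and below = 33 mm per print.
print_height_mm = 23 + 2 * 5

# Typical rolls were 390-400 cm long.
short_roll_mm, long_roll_mm = 3900, 4000

pictures_min = short_roll_mm // print_height_mm  # 118
pictures_max = long_roll_mm // print_height_mm   # 121
print(pictures_min, pictures_max)                # prints: 118 121
```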
Sold on the official Nintendo e-Shop (as triple packs of blue, cream and white rolls) until 2007, Game Boy branded official replacement thermal paper is now difficult to source. Even brand-new, sealed, unopened official rolls that were stored correctly and whose seal has not failed degrade relatively quickly once opened. Most, however, have suffered degradation while in storage due to a chemical reaction between the thermal paper and the adhesive backing layer. Due to the proprietary nature of the adhesive backing, replacement thermal paper that can be adhered to surfaces once printed upon (including brands such as 'MAXStick') is prohibitively expensive.
Instead, the thermal paper rolls can be successfully substituted with a 38mm x 4m alternative, with or without ('core-less') spindle cores, without repercussions on the printer. Such rolls are also compatible with some hand-held printing calculators, such as the Canon TP-8, Texas Instruments 5000–2008, Sharp 8180, and Casio FX-802. Alternatively, wider rolls (such as 57mm x 30mm x 12.7mm) can be cut or trimmed to 38mm, and function without issue.
Due to the inherent limitations of thermal paper, photographs printed on it will fade over time until the paper is virtually blank; how quickly depends heavily on the thermal paper variant used, and may be as short as a few months or as long as a few years. Paper in this state can usually be re-used, as long as the strip is long enough to be manually fed into the takeup.
It is unknown whether original Game Boy Printer paper contains the chemicals Bisphenol-A (BPA) or its analog Bisphenol-S (BPS). Previously very widely used in plastics and thermal receipt paper due to their heat resistance and stability, these are currently being phased out of thermal paper coatings due to their in-vivo accrual (via direct dermal absorption) and resultant oestrogen-mimicking and endocrine disruption. Modern thermal paper roll replacements, or their manufacturers, usually clearly state if they are Bisphenol free [BP-Free].
Game Boy Printer Protocol
The communication between the Game Boy and the Game Boy Printer is via a simple serial link, carrying a serial clock (provided by the Game Boy), serial data output (from the Game Boy to the printer), and serial data input (from the printer to the Game Boy). The Game Boy sends a packet to the printer, to which the printer responds with an acknowledgement as well as a status code.
Packet Format
Communication is via the Game Boy sending the printer a simple packet with the structure shown below. In general, everything from the first "sync_word" through the checksum is sent by the Game Boy to the printer. The last two bytes of the packet exchange are used by the printer to acknowledge the packet and report its current status code.
Command may be either Initialize (0x01), Data (0x04), Print (0x02), or Inquiry (0x0F).
The payload size in bytes is given by the value of the `DATA_LENGTH` field.
Compression field is a compression indicator. No compression = 0x00
Checksum is a simple 16-bit sum of the bytes in the command, compression, data length, and data payload fields.
The status byte is a bit field indicating various aspects of the printer's state (e.g. whether it is still printing).
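Based on the field list above, assembling a packet on the Game Boy side can be sketched as follows. The 0x88 0x33 sync word and the little-endian byte ordering follow common reverse-engineering write-ups of the protocol; treat those specifics as assumptions of this sketch:

```python
def build_packet(command, payload=b"", compression=0x00):
    """Build a Game Boy Printer packet: sync word, command, compression flag,
    little-endian data length, payload, then a little-endian 16-bit checksum.
    The two ack/status bytes are clocked back by the printer, not sent here."""
    length = len(payload)
    body = bytes([command, compression,
                  length & 0xFF, (length >> 8) & 0xFF]) + payload
    checksum = sum(body) & 0xFFFF  # simple sum over command..payload
    return (b"\x88\x33" + body
            + bytes([checksum & 0xFF, (checksum >> 8) & 0xFF]))

init_packet = build_packet(0x01)              # Initialize: empty payload
data_packet = build_packet(0x04, bytes(640))  # Data: two rows of 20 tiles
```

A full print session would send Initialize, enough Data packets to fill the buffer, then a Print packet, polling the status byte with Inquiry packets until printing finishes.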
Commands
Initialize (0x01)
Typical Payload Size = 0
This packet is sent without a data payload. It signals to the printer to clear the settings and prepare for the first data payload.
Data (0x04)
Typical Payload Size = 640
The data packet is for transferring the image data to the printer data buffer. The typical size of the data payload is 640 bytes, since it stores two printable rows of 20 standard Game Boy tiles (2-bit colour in an 8x8 pixel grid), each tile taking 16 bytes.
Print (0x02)
Typical Payload Size = 4
This commands the printer to start printing. It also has 4 settings bytes for printing.
Inquiry (0x0F)
Typical Payload Size = 0
Used for checking the printer status byte. This may be for checking if there is enough data in the printer buffer to start printing smoothly or if the printer is currently printing.
Printer Status Reply Byte
Usage today
Mad Catz and Xchanger sold a kit that enabled users to connect a Game Boy to a PC and print images using the PC's printer. Hobbyists outside the UK can also make their own cable for uploading images to their computer. A Game Boy Printer emulator is needed for the Game Boy to interface with the PC once linked via cable. The Game Boy Printer Paper has also been discontinued, and rolls of the genuine article that still produce a reliable image are becoming more difficult to find. Regular thermal paper, such as the kind used for POS terminals, can be cut to the proper width and used successfully with the Game Boy Printer.
The system will print a test message reading "Hello" if it is turned on while the feed button is held. According to the manual, this is used to test if the printer is functioning properly. To get around using six AA batteries (1.5 volts each) for the printer, a single 9V battery can be used if wired properly, because the printer requires 9V DC.
Notes
Further Information
Reverse Engineering
'Ben Heck Reverse Engineers Game Boy Printer': https://www.youtube.com/watch?v=43FfJvd-YP4
References
Game Boy accessories
Products introduced in 1998
Computer printers
Non-impact printing |
5311 | https://en.wikipedia.org/wiki/Computer%20programming | Computer programming | Computer programming is the process of performing a particular computation (or more generally, accomplishing a specific computing result), usually by designing/building an executable computer program. Programming involves tasks such as analysis, generating algorithms, profiling algorithms' accuracy and resource consumption, and the implementation of algorithms (usually in a chosen programming language, commonly referred to as coding). The source code of a program is written in one or more languages that are intelligible to programmers, rather than machine code, which is directly executed by the central processing unit. The purpose of programming is to find a sequence of instructions that will automate the performance of a task (which can be as complex as an operating system) on a computer, often for solving a given problem. Proficient programming thus usually requires expertise in several different subjects, including knowledge of the application domain, specialized algorithms, and formal logic.
Tasks accompanying and related to programming include testing, debugging, source code maintenance, implementation of build systems, and management of derived artifacts, such as the machine code of computer programs. These might be considered part of the programming process, but often the term software development is used for this larger process with the term programming, implementation, or coding reserved for the actual writing of code. Software engineering combines engineering techniques with software development practices. Reverse engineering is a related process used by designers, analysts, and programmers to understand and re-create/re-implement.
History
Programmable devices have existed for centuries. As early as the 9th century, a programmable music sequencer was invented by the Persian Banu Musa brothers, who described an automated mechanical flute player in the Book of Ingenious Devices. In 1206, the Arab engineer Al-Jazari invented a programmable drum machine where a musical mechanical automaton could be made to play different rhythms and drum patterns, via pegs and cams. In 1801, the Jacquard loom could produce entirely different weaves by changing the "program" – a series of pasteboard cards with holes punched in them.
Code-breaking algorithms have also existed for centuries. In the 9th century, the Arab mathematician Al-Kindi described a cryptographic algorithm for deciphering encrypted code, in A Manuscript on Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest code-breaking algorithm.
The first computer program is generally dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine.
In the 1880s Herman Hollerith invented the concept of storing data in machine-readable form. Later a control panel (plug board) added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s, unit record equipment such as the IBM 602 and IBM 604, were programmed by control panels in a similar way, as were the first electronic computers. However, with the concept of the stored-program computer introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory.
Machine language
Machine code was the language of early programs, written in the instruction set of the particular machine, often in binary notation. Assembly languages were soon developed that let the programmer specify instructions in a text format (e.g., ADD X, TOTAL), with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, two machines with different instruction sets also have different assembly languages.
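This one-to-one correspondence between mnemonics and opcodes can be illustrated with a toy assembler; the instruction set, opcodes, and addresses below are invented purely for illustration:

```python
# Toy assembler: each mnemonic maps to one machine opcode, and each
# symbolic name maps to one memory address (all values invented).
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03}
SYMBOLS = {"X": 0x10, "TOTAL": 0x11}

def assemble(line):
    """Translate one 'MNEMONIC OPERAND' line into two machine-code bytes."""
    mnemonic, operand = line.split()
    return bytes([OPCODES[mnemonic], SYMBOLS[operand]])

program = b"".join(assemble(line)
                   for line in ["LOAD X", "ADD TOTAL", "STORE TOTAL"])
# program is now 01 10 02 11 03 11: the bytes a CPU with this
# (hypothetical) instruction set would execute directly.
```

Because the mapping is fixed per machine, the same source line would assemble to entirely different bytes on a machine with a different instruction set.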
Compiler languages
High-level languages made the process of developing a program simpler and more understandable, and less bound to the underlying hardware.
The first compiler related tool, the A-0 System, was developed in 1952 by Grace Hopper, who also coined the term 'compiler'. FORTRAN, the first widely used high-level language to have a functional implementation, came out in 1957, and many other languages were soon developed—in particular, COBOL aimed at commercial data processing, and Lisp for computer research.
These compiled languages allow the programmer to write programs in terms that are syntactically richer, and more capable of abstracting the code, making it easy to target for varying machine instruction sets via compilation declarations and heuristics. Compilers harnessed the power of computers to make programming easier by allowing programmers to specify calculations by entering a formula using infix notation.
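Python's own toolchain makes this step visible: an infix formula is compiled into instructions for a stack machine (the exact opcode names printed vary between CPython versions):

```python
import dis

# Infix notation in, machine-independent code object out.
code = compile("a + b * c", "<formula>", "eval")

# Show the generated stack-machine instructions
# (e.g. LOAD_NAME / BINARY_OP on recent CPython versions).
dis.dis(code)

# The compiled object can then be evaluated against any variable bindings:
result = eval(code, {"a": 1, "b": 2, "c": 3})  # 1 + 2 * 3 = 7
```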
Source code entry
Programs were mostly entered using punched cards or paper tape. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were also developed that allowed changes and corrections to be made much more easily than with punched cards.
Modern programming
Quality requirements
Whatever the approach to development may be, the final program must satisfy some fundamental properties. The following properties are among the most important:
Reliability: how often the results of a program are correct. This depends on conceptual correctness of algorithms and minimization of programming mistakes, such as mistakes in resource management (e.g., buffer overflows and race conditions) and logic errors (such as division by zero or off-by-one errors).
Robustness: how well a program anticipates problems due to errors (not bugs). This includes situations such as incorrect, inappropriate or corrupt data, unavailability of needed resources such as memory, operating system services, and network connections, user error, and unexpected power outages.
Usability: the ergonomics of a program: the ease with which a person can use the program for its intended purpose or in some cases even unanticipated purposes. Such issues can make or break its success even regardless of other issues. This involves a wide range of textual, graphical, and sometimes hardware elements that improve the clarity, intuitiveness, cohesiveness and completeness of a program's user interface.
Portability: the range of computer hardware and operating system platforms on which the source code of a program can be compiled/interpreted and run. This depends on differences in the programming facilities provided by the different platforms, including hardware and operating system resources, expected behavior of the hardware and operating system, and availability of platform-specific compilers (and sometimes libraries) for the language of the source code.
Maintainability: the ease with which a program can be modified by its present or future developers in order to make improvements or to customize, fix bugs and security holes, or adapt it to new environments. Good practices during initial development make the difference in this regard. This quality may not be directly apparent to the end user but it can significantly affect the fate of a program over the long term.
Efficiency/performance: Measure of system resources a program consumes (processor time, memory space, slow devices such as disks, network bandwidth and to some extent even user interaction): the less, the better. This also includes careful management of resources, for example cleaning up temporary files and eliminating memory leaks. This is often discussed under the shadow of a chosen programming language. Although the language certainly affects performance, even slower languages, such as Python, can execute programs instantly from a human perspective. Speed, resource usage, and performance are important for programs that bottleneck the system, but efficient use of programmer time is also important and is related to cost: more hardware may be cheaper.
Readability of source code
In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code. It affects the aspects of quality above, including portability, usability and most importantly maintainability.
Readability is important because programmers spend the majority of their time reading, trying to understand and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. A study found that a few simple readability transformations made code shorter and drastically reduced the time to understand it.
Following a consistent programming style often helps readability. However, readability is more than just programming style. Many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability. Some of these factors include:
Different indent styles (whitespace)
Comments
Decomposition
Naming conventions for objects (such as variables, classes, functions, procedures, etc.)
The presentation aspects of this (such as indents, line breaks, color highlighting, and so on) are often handled by the source code editor, but the content aspects reflect the programmer's talent and skills.
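The effect of these factors is easiest to see side by side; the two functions below compute the same result:

```python
# Hard to read: cryptic names, no documentation, a magic number.
def f(x):
    return [i for i in x if i%2==0][:10]

# Readable: descriptive names, a docstring, and the constant given a name.
MAX_RESULTS = 10

def first_even_numbers(values):
    """Return the first MAX_RESULTS even numbers found in `values`."""
    evens = [value for value in values if value % 2 == 0]
    return evens[:MAX_RESULTS]
```

The compiler treats both identically; only the human maintainer benefits from the second form.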
Various visual programming languages have also been developed with the intent to resolve readability concerns by adopting non-traditional approaches to code structure and display. Integrated development environments (IDEs) aim to integrate all such help. Techniques like code refactoring can also enhance readability.
Algorithmic complexity
The academic field and the engineering practice of computer programming are both largely concerned with discovering and implementing the most efficient algorithms for a given class of problems. For this purpose, algorithms are classified into orders using so-called Big O notation, which expresses resource use, such as execution time or memory consumption, in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms that are best suited to the circumstances.
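As a small illustration, two correct algorithms for testing membership in a sorted list belong to different orders:

```python
def linear_search(sorted_values, target):
    """O(n): examines items one by one."""
    for value in sorted_values:
        if value == target:
            return True
    return False

def binary_search(sorted_values, target):
    """O(log n): halves the remaining range at each step."""
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] == target:
            return True
        if sorted_values[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False
```

On a million-element list, binary search needs at most about 20 comparisons, whereas linear search may need a million; this gap, not constant-factor tuning, is what Big O notation captures.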
Chess algorithms as an example
"Programming a Computer for Playing Chess" was a 1950 paper that evaluated a "minimax" algorithm and is part of the history of algorithmic complexity; a course on IBM's Deep Blue chess computer is part of the computer science curriculum at Stanford University.
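The minimax idea can be sketched on a toy game tree; the tree below is invented for illustration and is unrelated to any chess position:

```python
def minimax(node, maximizing):
    """Return the game value of `node`, assuming both players play optimally.

    A node is either a number (the score of a finished game, from the
    maximizing player's point of view) or a list of child nodes.
    """
    if isinstance(node, (int, float)):  # leaf: game over, return its score
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A tiny two-ply game tree: the maximizer moves first, the minimizer second.
tree = [[3, 5], [2, 9]]
```

Here the maximizer chooses between subtrees worth min(3, 5) = 3 and min(2, 9) = 2, so the value of the game is 3. Real chess programs apply the same recursion to vastly larger trees, which is why algorithmic efficiency (pruning, move ordering) dominates their design.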
Methodologies
The first step in most formal software development processes is requirements analysis, followed by testing to determine value modeling, implementation, and failure elimination (debugging). Many different approaches exist for each of these tasks. One approach popular for requirements analysis is Use Case analysis. Many programmers use forms of Agile software development, where the various stages of formal software development are integrated into short cycles that take a few weeks rather than years. There are many approaches to the software development process.
Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both the OOAD and MDA.
A similar technique used for database design is Entity-Relationship Modeling (ER Modeling).
Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic languages.
Measuring language usage
It is difficult to determine which modern programming languages are most popular. Methods of measuring programming language popularity include: counting the number of job advertisements that mention the language, the number of books sold and courses teaching the language (this overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (this underestimates the number of users of business languages such as COBOL).
Some languages are very popular for particular kinds of applications, while some languages are regularly used to write many different kinds of applications. For example, COBOL is still strong in corporate data centers, often on large mainframe computers; Fortran in engineering applications; scripting languages in Web development; and C in embedded software. Many applications use a mix of several languages in their construction and use. New languages are generally designed around the syntax of a prior language with new functionality added (for example, C++ adds object-orientation to C, and Java adds memory management and bytecode to C++, but as a result loses efficiency and the ability for low-level manipulation).
Debugging
Debugging is a very important task in the software development process since having defects in a program can have significant consequences for its users. Some languages are more prone to some kinds of faults because their specification does not require compilers to perform as much checking as other languages. Use of a static code analysis tool can help detect some possible problems. Normally the first step in debugging is to attempt to reproduce the problem. This can be a non-trivial task, for example as with parallel processes or some unusual software bugs. Also, specific user environment and usage history can make it difficult to reproduce the problem.
After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, when a bug in a compiler makes it crash while parsing a large source file, a simplified test case consisting of only a few lines from the original source file can be sufficient to reproduce the same crash. Trial-and-error/divide-and-conquer is needed: the programmer tries to remove some parts of the original test case and checks whether the problem still exists. When debugging a problem in a GUI, the programmer can try to skip some user interaction from the original problem description and check whether the remaining actions are sufficient for the bug to appear. Scripting and breakpointing are also part of this process.
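The trial-and-error/divide-and-conquer simplification can be sketched as a naive test-case reducer; the `triggers_bug` predicate below stands in for actually re-running the failing program, and the "crash" condition is invented for illustration:

```python
def reduce_test_case(lines, triggers_bug):
    """Greedily remove chunks of `lines` while the bug still reproduces.

    `triggers_bug` is a function that re-runs the failing program on a
    candidate input and reports whether the failure still occurs.
    """
    chunk = len(lines) // 2
    while chunk >= 1:
        i = 0
        while i < len(lines):
            candidate = lines[:i] + lines[i + chunk:]  # try without this chunk
            if candidate and triggers_bug(candidate):
                lines = candidate          # smaller input still fails: keep it
            else:
                i += chunk                 # this chunk was needed: move on
        chunk //= 2
    return lines

# Stand-in bug: the "compiler" crashes whenever the input contains "BOOM".
source = ["int a;", "BOOM", "int b;", "int c;"]
reduced = reduce_test_case(source, lambda ls: "BOOM" in ls)
```

Starting from four lines, the reducer converges on the single line that actually provokes the failure. Production tools such as delta debuggers refine this basic loop considerably.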
Debugging is often done with IDEs. Standalone debuggers like GDB are also used, and these often provide less of a visual environment, usually using a command line. Some text editors such as Emacs allow GDB to be invoked through them, to provide a visual environment.
Programming languages
Different programming languages support different styles of programming (called programming paradigms). The choice of language used is subject to many considerations, such as company policy, suitability to task, availability of third-party packages, or individual preference. Ideally, the programming language best suited for the task at hand will be selected. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute. Languages form an approximate spectrum from "low-level" to "high-level"; "low-level" languages are typically more machine-oriented and faster to execute, whereas "high-level" languages are more abstract and easier to use but execute less quickly. It is usually easier to code in "high-level" languages than in "low-level" ones.
Allen Downey, in his book How To Think Like A Computer Scientist, writes:
The details look different in different languages, but a few basic instructions appear in just about every language:
Input: Gather data from the keyboard, a file, or some other device.
Output: Display data on the screen or send data to a file or other device.
Arithmetic: Perform basic arithmetical operations like addition and multiplication.
Conditional Execution: Check for certain conditions and execute the appropriate sequence of statements.
Repetition: Perform some action repeatedly, usually with some variation.
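The five basic instructions above can be shown in a single short Python program; keyboard input is replaced by a hard-coded string so the sketch is self-contained:

```python
# Input: normally `input()` would gather data from the keyboard;
# a hard-coded string stands in for it here.
raw = "4"
count = int(raw)

# Arithmetic: basic operations such as addition and multiplication.
square = count * count
total = square + count

# Conditional execution: check a condition, run the matching branch.
if total % 2 == 0:
    parity = "even"
else:
    parity = "odd"

# Repetition: perform an action repeatedly, with some variation.
doubled = []
for n in range(count):
    doubled.append(n * 2)

# Output: display data on the screen.
print(parity, doubled)
```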
Many computer languages provide a mechanism to call functions provided by shared libraries. Provided the functions in a library follow the appropriate run-time conventions (e.g., the method of passing arguments), these functions may be written in any other language.
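Python's standard `ctypes` module is one concrete illustration of such cross-language calls: it loads a shared C library and invokes its functions, relying only on the library's run-time calling conventions. The library name `"libc.so.6"` assumes a Linux system; on other platforms the name differs:

```python
import ctypes

# Load the C standard library; "libc.so.6" is the usual name on Linux.
libc = ctypes.CDLL("libc.so.6")

# Declare argument and result types so ctypes marshals values correctly.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-5))  # calls the C function abs(), written in C, from Python
```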
Programmers
Computer programmers are those who write computer software. Their jobs usually involve:
Prototyping
Coding
Debugging
Documentation
Integration
Maintenance
Requirements analysis
Software architecture
Software testing
Specification
Although programming has been presented in the media as a somewhat mathematical subject, some research shows that good programmers have strong skills in natural human languages, and that learning to code is similar to learning a foreign language.
See also
ACCU
Association for Computing Machinery
Computer networking
Hello world program
Institution of Analysts and Programmers
National Coding Week
Object hierarchy
System programming
Computer programming in the punched card era
The Art of Computer Programming
Women in computing
Timeline of women in computing
References
Sources
Further reading
A.K. Hartmann, Practical Guide to Computer Simulations, Singapore: World Scientific (2009)
A. Hunt, D. Thomas, and W. Cunningham, The Pragmatic Programmer. From Journeyman to Master, Amsterdam: Addison-Wesley Longman (1999)
Brian W. Kernighan, The Practice of Programming, Pearson (1999)
Weinberg, Gerald M., The Psychology of Computer Programming, New York: Van Nostrand Reinhold (1971)
Edsger W. Dijkstra, A Discipline of Programming, Prentice-Hall (1976)
O.-J. Dahl, E.W.Dijkstra, C.A.R. Hoare, Structured Programming, Academic Press (1972)
David Gries, The Science of Programming, Springer-Verlag (1981)
External links
Programming
Computerized physician order entry

Computerized physician order entry (CPOE), sometimes referred to as computerized provider order entry or computerized provider order management (CPOM), is a process of electronic entry of medical practitioner instructions for the treatment of patients (particularly hospitalized patients) under his or her care.
The entered orders are communicated over a computer network to the medical staff or to the departments (pharmacy, laboratory, or radiology) responsible for fulfilling the order. CPOE reduces the time it takes to distribute and complete orders, while increasing efficiency by reducing transcription errors including preventing duplicate order entry, while simplifying inventory management and billing.
CPOE is a form of patient management software.
Required data
In a graphical representation of an order sequence, specific data should be presented to CPOE system staff in cleartext, including:
identity of the patient
role of required member of staff
resources, materials and medication applied
procedures to be performed
operational sequence to be obeyed
feedback to be noted
case specific documentation to build
Some textual data can be reduced to simple graphics.
CPOE related terminology
CPOE systems use terminology familiar to medical and nursing staff, but there are different terms used to classify and concatenate orders. The following items are examples of additional terminology that a CPOE system programmer might need to know:
Filler
The application responding to, i.e., performing, a request for services (orders) or producing an observation. The filler can also originate requests for services (new orders), add additional services to existing orders, replace existing orders, put an order on hold, discontinue an order, release a held order, or cancel existing orders.
Order
A request for a service from one application to a second application. In some cases an application is allowed to place orders with itself.
Order detail segment
One of several segments that can carry order information. Future ancillary specific segments may be defined in subsequent releases of the Standard if they become necessary.
Placer
The application or individual originating a request for services (order).
Placer order group
A list of associated orders coming from a single location regarding a single patient.
Order Set
A grouping of orders used to standardize and expedite the ordering process for a common clinical scenario. (Typically, these orders are started, modified, and stopped by a licensed physician.)
Protocol
A grouping of orders used to standardize and automate a clinical process on behalf of a physician. (Typically, these orders are started, modified, and stopped by a nurse, pharmacist, or other licensed health professional.)
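The placer/filler vocabulary above can be sketched with simple data structures; the class and field names below are illustrative only and do not follow any real messaging standard such as HL7:

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    """A request for a service, sent from a placer to a filler."""
    order_id: str
    patient_id: str
    service: str          # e.g. a lab test or a medication
    status: str = "new"   # new, held, discontinued, canceled, completed

@dataclass
class PlacerOrderGroup:
    """Associated orders from a single location regarding a single patient."""
    patient_id: str
    orders: list = field(default_factory=list)

    def place(self, order: Order):
        if order.patient_id != self.patient_id:
            raise ValueError("order belongs to a different patient")
        self.orders.append(order)

group = PlacerOrderGroup(patient_id="P001")
group.place(Order("O1", "P001", "CBC blood panel"))
```

The patient-identity check in `place` mirrors the real-world requirement that a placer order group concerns exactly one patient.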
Features of CPOE systems
Features of the ideal computerized physician order entry system (CPOE) include:
Ordering
Physician orders are standardized across the organization, yet may be individualized for each doctor or specialty by using order sets. Orders are communicated to all departments and involved caregivers, improving response time and avoiding scheduling problems and conflicts with existing orders.
Patient-centered decision support
The ordering process includes a display of the patient's medical history and current results, as well as evidence-based clinical guidelines to support treatment decisions. It often uses medical logic modules and/or Arden syntax to facilitate fully integrated clinical decision support systems (CDSS).
Patient safety features
The CPOE system allows real-time patient identification, drug dose recommendations, adverse drug reaction reviews, and checks on allergies and test or treatment conflicts. Physicians and nurses can review orders immediately for confirmation.
Intuitive human interface
The order entry workflow corresponds to familiar "paper-based" ordering to allow efficient use by new or infrequent users.
Regulatory compliance and security
Access is secure, and a permanent record is created with electronic signature.
Portability
The system accepts and manages orders for all departments at the point of care, from any location in the health system (physician's office, hospital or home) through a variety of devices, including wireless PCs and tablet computers.
Management
The system delivers statistical reports online so that managers can analyze patient census and make changes in staffing, replace inventory, and audit utilization and productivity throughout the organization. Data is collected for training, planning, and root cause analysis for patient safety events.
Billing
Documentation is improved by linking diagnoses (ICD-9-CM or ICD-10-CM codes) to orders at the time of order entry to support appropriate charges.
Patient safety benefits
In the past, physicians traditionally hand-wrote or verbally communicated orders for patient care, which were then transcribed by various individuals (such as unit clerks, nurses, and ancillary staff) before being carried out. Handwritten reports or notes, manual order entry, non-standard abbreviations and poor legibility lead to errors and injuries to patients. A follow-up IOM report in 2001 advised use of electronic medication ordering, with computer- and internet-based information systems to support clinical decisions. Prescribing errors are the largest identified source of preventable hospital medical error. A 2006 report by the Institute of Medicine estimated that a hospitalized patient is exposed to a medication error each day of his or her stay, and further studies have estimated that CPOE implementation at all nonrural hospitals in the United States could prevent over 500,000 serious medication errors each year. Studies of computerized physician order entry (CPOE) have yielded evidence suggesting that the medication error rate can be reduced by 80%, and that errors with potential for serious harm or death to patients can be reduced by 55%; other studies have also suggested benefits. Further, in 2005, CMS and CDC released a report showing that only 41 percent of prophylactic antibacterials were correctly stopped within 24 hours of completed surgery. The researchers conducted an analysis over an eight-month period, implementing a CPOE system designed to stop the administration of prophylactic antibacterials. Results showed CPOE significantly improved timely discontinuation of antibacterials, from 38.8 percent of surgeries to 55.7 percent in the intervention hospital. CPOE/e-prescribing systems can provide automatic dosing alerts (for example, letting the user know that a dose is too high and thus dangerous) and interaction checking (for example, warning the user that two medicines ordered together can cause health problems).
In this way, specialists in pharmacy informatics work with the medical and nursing staffs at hospitals to improve the safety and effectiveness of medication use by utilizing CPOE systems.
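The dosing alerts and interaction checks described above can be sketched as lookups against known limits; the drug names and thresholds below are invented for illustration and are not clinical data:

```python
# Invented, non-clinical illustration data.
MAX_DAILY_DOSE_MG = {"drugA": 4000, "drugB": 100}
INTERACTING_PAIRS = {frozenset(("drugA", "drugB"))}

def check_order(drug, daily_dose_mg, current_meds):
    """Return a list of alert strings for a proposed medication order."""
    alerts = []
    limit = MAX_DAILY_DOSE_MG.get(drug)
    if limit is not None and daily_dose_mg > limit:
        alerts.append(f"dose alert: {daily_dose_mg} mg/day exceeds {limit} mg")
    for med in current_meds:
        if frozenset((drug, med)) in INTERACTING_PAIRS:
            alerts.append(f"interaction alert: {drug} with {med}")
    return alerts

alerts = check_order("drugA", 5000, ["drugB"])
```

A real CPOE system draws these limits and interaction pairs from curated clinical knowledge bases and patient records rather than static tables, but the checking logic at order-entry time has this general shape.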
Advantages
Generally, CPOE is advantageous because it does more than merely format retrospective information better, as traditional hospital information system designs do. Its widely innovative key advantage is conveying information from the physician in charge of a patient's treatment to the different roles involved in carrying out that treatment. This makes CPOE primarily a tool for transferring information to the performing staff, and only secondarily a tool for collecting action items for the accounting staff; the needs of proper accounting are served automatically by feedback on the completion of orders.
CPOE is generally not suitable without reasonable training and tutoring. As with other technical means, system-based communication of information may be inaccessible or inoperable due to failures; this is no different from relying on an ordinary telephone or a conventional hospital information system. Beyond that, the information conveyed may be faulty or erratic, so validation of orders along the chain must be well organized. Errors lead to liability cases, as with all professional treatment of patients.
Prescriber and staff inexperience may cause slower entry of orders at first and use more staff time, and CPOE is slower than person-to-person communication in an emergency situation. Physician-to-nurse communication can worsen if each group works alone at their workstations.
In general, though, the option to reuse order sets with new patients lays the basis for substantial enhancement of the processing of services to patients within the complex distribution of work among the roles involved. The basic concepts are defined with the clinical pathway approach. However, success does not occur by itself: the preparatory work has to be budgeted from the very beginning and maintained over time. Patterns of proper management from other service and production industries may apply; the medical methodologies and nursing procedures themselves, however, are not affected by these management approaches.
Risks
CPOE presents several possible dangers by introducing new types of errors. Automation causes a false sense of security, a misconception that when technology suggests a course of action, errors are avoided. These factors contributed to an increased mortality rate in the Children's Hospital of Pittsburgh's Pediatric ICU when a CPOE system was introduced. In other settings, shortcut or default selections can override non-standard medication regimens for elderly or underweight patients, resulting in toxic doses. Frequent alerts and warnings can interrupt work flow, causing these messages to be ignored or overridden due to alert fatigue. CPOE and automated drug dispensing was identified as a cause of error by 84% of over 500 health care facilities participating in a surveillance system by the United States Pharmacopoeia. Introducing CPOE to a complex medical environment requires ongoing changes in design to cope with unique patients and care settings, close supervision of overrides caused by automatic systems, and training, testing and re-training all users.
Implementation
CPOE systems can take years to install and configure. Despite ample evidence of the potential to reduce medication errors, adoption of this technology by doctors and hospitals in the United States has been slowed by resistance to changes in physicians' practice patterns, costs and training time involved, and concern with interoperability and compliance with future national standards. According to a study by RAND Health, the US healthcare system could save more than 81 billion dollars annually, reduce adverse medical events and improve the quality of care if it were to widely adopt CPOE and other health information technology. As more hospitals become aware of the financial benefits of CPOE, and more physicians with a familiarity with computers enter practice, increased use of CPOE is predicted. Several high-profile failures of CPOE implementation have occurred, so a major effort must be focused on change management, including restructuring workflows, dealing with physicians' resistance to change, and creating a collaborative environment.
An early success with CPOE by the United States Department of Veterans Affairs (VA) is the Veterans Health Information Systems and Technology Architecture or VistA. A graphical user interface known as the Computerized Patient Record System (CPRS) allows health care providers to review and update a patient's record at any computer in the VA's over 1,000 healthcare facilities. CPRS includes the ability to place orders by CPOE, including medications, special procedures, x-rays, patient care nursing orders, diets and laboratory tests.
The world's first successful implementation of a CPOE system was at El Camino Hospital in Mountain View, California in the early 1970s. The Medical Information System (MIS) was originally developed by a software and hardware team at Lockheed in Sunnyvale, California, which became the TMIS group at Technicon Instruments Corporation. The MIS system used a light pen to allow physicians and nurses to quickly point and click items to be ordered.
One of the largest projects for a national EHR is by the National Health Service (NHS) in the United Kingdom. The goal of the NHS is to have 60,000,000 patients with a centralized electronic health record by 2010. The plan involves a gradual roll-out commencing May 2006, providing general practices in England access to the National Programme for IT (NPfIT). The NHS component, known as the "Connecting for Health Programme", includes office-based CPOE for medication prescribing and test ordering and retrieval, although some concerns have been raised about patient safety features.
In 2008, the Massachusetts Technology Collaborative and the New England Healthcare Institute (NEHI) published research showing that 1 in 10 patients admitted to a Massachusetts community hospital suffered a preventable medication error. The study argued that Massachusetts hospitals could prevent 55,000 adverse drug events per year and save $170 million annually if they fully implemented CPOE. The findings prompted the Commonwealth of Massachusetts to enact legislation requiring all hospitals to implement CPOE by 2012 as a condition of licensure.
The study also concluded that it would cost a hospital in Massachusetts approximately $2.1 million to implement a CPOE system and about $435,000 to maintain it, while saving about $2.7 million annually per hospital. Hospitals would still see payback within 26 months through reduced hospitalizations caused by error. Despite the advantages and cost savings, CPOE is still not widely adopted by many hospitals in the US.
The Leapfrog Group's 2008 survey showed that most hospitals were still not complying with having a fully implemented, effective CPOE system. The CPOE requirement became more challenging to meet in 2008 because Leapfrog introduced a new requirement: hospitals must test their CPOE systems with Leapfrog's CPOE Evaluation Tool. As a result, the number of hospitals in the survey considered to be fully meeting the standard dropped to 7% in 2008 from 11% the previous year. Though the adoption rate seems very low in 2008, it is still an improvement from 2002, when only 2% of hospitals met this Leapfrog standard.
See also
Continuity of Care Record
Electronic health record
Electronic medical record
Electronic prescribing
Health informatics
Pharmacy informatics
VistA – Veterans Health Information Systems and Technology Architecture
References
External links
Certification Commission for Healthcare Information Technology (CCHIT)
AHRQ National Resource Center for Health IT
Nationwide Electronic Requisition Network™
Health informatics
Medical terminology |
Moneydance

Moneydance is a personal finance software application developed by The Infinite Kind, formerly developed by Reilly Technologies, USA. Written in Java, it can be run on many different computers and operating systems. Under the hood, Moneydance implements a double-entry bookkeeping system, but the user interface is geared towards non-accountants.
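The double-entry principle behind such a system can be sketched in a few lines: every transaction's debits and credits must sum to zero, so the books always balance. This is a generic illustration, not Moneydance's actual data model:

```python
def post_transaction(balances, entries):
    """Apply a double-entry transaction.

    `entries` maps account name -> amount, where positive amounts are
    debits and negative amounts are credits. The amounts must sum to
    zero, which keeps the books balanced at all times.
    """
    if sum(entries.values()) != 0:
        raise ValueError("transaction does not balance")
    for account, amount in entries.items():
        balances[account] = balances.get(account, 0) + amount
    return balances

balances = {}
# Buying $50 of groceries from a checking account:
post_transaction(balances, {"Expenses:Groceries": 50, "Assets:Checking": -50})
```

A user interface "geared towards non-accountants" simply hides the second leg of each transaction: entering a grocery purchase silently credits the checking account.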
Moneydance implements the OFX protocol to perform online banking and bill payment. Other features include check printing, graphing and reporting, scheduled transaction reminders, transaction tags, VAT/GST tracking, budget management and tracking, file encryption, and investment portfolio management.
Moneydance has been localized into French, German, UK English, Norwegian, Greek (partially), Spanish, Portuguese and Italian. UK supermarket Tesco's "Personal Finance" software is based on Moneydance.
An open application programming interface (API) is also available, allowing people to write extensions to the program.
The application is scriptable in Jython.
Releases
Moneydance 2008
Moneydance 2010
Moneydance 2011
Moneydance 2012
Moneydance 2012.2
Moneydance 2012.5
Moneydance 2014
Moneydance 2015
Moneydance 2015.2
Moneydance 2015.3
Moneydance 2015.4
Moneydance 2015.6
Moneydance 2015.7
Moneydance 2017
Moneydance 2017.2
Moneydance 2017.3
Moneydance 2017.5
Moneydance 2019
Moneydance 2019.1
Moneydance 2021
Moneydance 2022
Moneydance 2022.2
See also
List of personal finance software
References
External links
2007 MacWorld Review
Accounting software
Accounting software for Linux
Java (programming language) software |
Look and feel

In software design, the look and feel of a graphical user interface comprises aspects of its design, including elements such as colors, shapes, layout, and typefaces (the "look"), as well as the behavior of dynamic elements such as buttons, boxes, and menus (the "feel"). The term can also refer to aspects of a non-graphical user interface (such as a command-line interface), as well as to aspects of an API – mostly to parts of an API that are not related to its functional properties. The term is used in reference to both software and websites.
Look and feel applies to other products. In documentation, for example, it refers to the graphical layout (document size, color, font, etc.) and the writing style. In the context of equipment, it refers to consistency in controls and displays across a product line.
Look and feel in operating system user interfaces serves two general purposes. First, it provides branding, helping to identify a set of products from one company. Second, it increases ease of use, since users will become familiar with how one product functions (looks, reads, etc.) and can translate their experience to other products with the same look and feel.
In widget toolkits
In contrast to operating system user interfaces, for which look and feel is part of product identification, widget toolkits often allow users to specialize their application's look and feel, either by deriving from the toolkit's default look and feel or by completely defining their own. This specialization can range from skinning (which deals only with the look, or visual appearance, of the graphical control elements) to completely specializing the way the user interacts with the software (that is, the feel).
The definition of the look and feel to associate with the application is often done at initialization, but some Widget toolkits, such as the Swing widget toolkit that is part of the Java API, allow users to change the look and feel at runtime (see Pluggable look and feel).
Some examples of Widget toolkits that support setting a specialized look and feel are:
XUL (XML User Interface Language): The look and feel of the user interface can be specialized in a CSS file associated with the XUL definition files. Properties that can be specialized from the default are, for example, background or foreground colors of widgets, fonts, size of widgets, and so on.
Swing supports specializing the look and feel of widgets by deriving from the default or another existing one, creating one from scratch, or, beginning with J2SE 5.0, defining one in an XML property file (the skinnable Synth look and feel).
Lawsuits
Some companies try to assert copyright of trade dress over their look and feel.
The Broderbund v. Unison (1986) case was an early software copyright case that attempted to apply U.S. copyright law to the look and feel presented by a software product.
In 1987 Lotus sued Paperback Software and Mosaic for copyright infringement, false and misleading advertising, and unfair competition over their low-cost clones of 1-2-3, VP Planner and Twin, and sued Borland over its Quattro spreadsheet.
In December 1989, Xerox sued Apple over the Macintosh copyright.
Apple Computer was notable for its use of the term look and feel in reference to their Mac OS operating system. The firm tried, with some success, to block other software developers from creating software that had a similar look and feel. Apple argued that they had a copyright claim on the look and feel of their software, and even went so far as to sue Microsoft, alleging that the Windows operating system was illegally copying their look and feel.
Although provoking a vehement reaction from some in the software community, and causing Richard Stallman to form the League for Programming Freedom, the expected landmark ruling never happened, as most of the issues were resolved based on a license that Apple had granted Microsoft for Windows 1.0. See: Apple v. Microsoft. The First Circuit Court of Appeals rejected a copyright claim on the feel of a user interface in Lotus v. Borland.
Taking it to the street
GNU's Richard M. Stallman led an "Innovate, don't litigate" public demonstration outside Lotus's headquarters, using a hexadecimal chant:
1-2-3-4 kick the lawsuits out the door
5-6-7-8 innovate, don't litigate
9-A-B-C interfaces should be free
D,E,F,0 look and feel has got to go!
More recent reactions
In 2012 and 2014, Apple Inc. filed lawsuits against competing manufacturers of smartphones and tablet computers, claiming that those manufacturers copied the look and feel of Apple's popular iPhone and iPad products.
In APIs
An API, an interface to software that provides some kind of functionality, can also have a certain look and feel. Different parts of an API (e.g. different classes or packages) are often linked by common syntactic and semantic conventions (e.g. the same asynchronous execution model, or the same way object attributes are accessed). These elements are rendered either explicitly (i.e. as part of the syntax of the API) or implicitly (i.e. as part of the semantics of the API).
See also
Lotus "look and feel" lawsuit
Skeuomorph
Structure, sequence and organization
Trade dress
References
External links
Java Look and Feel collection
The Java Tutorials: Modifying the Look and Feel
Usability
User interfaces
Graphical user interfaces
Graphical user interface elements
Legal research |
Marek Rosa

Marek Rosa is a Slovak entrepreneur, programmer, and computer game developer. He is known as the CEO and founder of Keen Software House, an independent video game design studio, and CEO, founder, and CTO of GoodAI, a company dedicated to the research and development of general artificial intelligence. Both companies are based in Prague, the Czech Republic, and have their headquarters in the historical Oranžérie.
Keen Software House
Marek started as a programmer working independently on the Miner Wars games and the VRAGE engine. In 2010 he founded Keen Software House.
Miner Wars
In 2012, Keen Software House released their first video game, Miner Wars Arena. Later that year they released their second game, Miner Wars 2081.
Space Engineers
Keen Software House and Marek gained prominence with their third game, Space Engineers, which has sold over 4 million copies. Space Engineers is a voxel-based sandbox game that launched on Steam Early Access on October 23, 2013, and fully released on February 28, 2019.
Space Engineers presents the player with an open-world sandbox defined by the player's own creativity. Based on real science, Space Engineers features a realistic, volumetric-based physics engine: everything in the game can be assembled, disassembled, damaged and destroyed.
During the majority of its development, Space Engineers was updated on a weekly basis, guided by stated development goals and community feedback. Since its initial launch it has received major updates adding survival mode, multiplayer, dedicated server support, planets, and more. Space Engineers is open to community creation and modding.
On October 20, 2014, Keen Software House announced that Space Engineers had sold over 1,000,000 copies.
On May 14, 2015, the development firm provided open access to the game's source code (without making the game free) to accelerate mod development.
The Space Engineers player base has continued to grow over the years, with its largest increases in concurrent players occurring in 2019 and 2020.
Space Engineers was well received by both critics and the gaming community. According to Steam, as of 22 February 2021, 89% of the game's 74,301 reviews were positive.
Medieval Engineers
On January 13, 2015, Keen Software House announced their third title and second engineering game, Medieval Engineers. It was also announced that the game would be available on Steam Early Access.
Medieval Engineers is a sandbox game about engineering, construction and the maintenance of architectural works and mechanical equipment using medieval technology. Players build cities, castles and fortifications; construct mechanical devices and engines; perform landscaping and underground mining.
Medieval Engineers is inspired by real medieval technology and the way people survived and built architectural and mechanical works in medieval times. Medieval Engineers strives to follow the laws of physics and real history, and does not use technologies that were not available between the 5th and 15th centuries.
GoodAI
In 2014 Marek founded GoodAI with a $10 million personal investment, and in 2015 the company was announced publicly. Marek is the CEO and CTO of GoodAI and set its mission: to “develop safe general artificial intelligence - as fast as possible - to help humanity and understand the universe.”
In November 2016 Marek published the first research roadmap, outlining the direction of GoodAI focusing on developing topology architectures that support the gradual accumulation of skills.
In January 2017 Marek announced the creation of the AI Roadmap Institute, which aims to accelerate the creation of safe human-level artificial intelligence by encouraging, studying, mapping and comparing roadmaps towards this goal.
In February 2017 Marek founded the General AI Challenge, pledging $5 million in prize money to tackle crucial research problems in human-level AI development. The first round of the General AI Challenge was based around Gradual Learning and concluded in September 2017, and the second round was based on Solving the AI Race and concluded in July 2018.
In August 2018 Marek and GoodAI hosted the Human-Level AI Conference in Prague. For his work with GoodAI and involvement with the Human-Level AI Conference, Marek was awarded the Person of the Year Award at the AI Awards in Prague.
In December 2019 Marek and the GoodAI team published the Badger architecture paper, a unifying AI architecture defined by its key principle of modular life-long learning. The Badger architecture defines the direction of GoodAI's research.
In August 2020, at the Meta-Learning and Multi-Agent Learning Workshop organized by GoodAI, Marek announced the GoodAI Grants initiative, a $300,000 grant fund for artificial intelligence research.
Oranžérie reconstruction
Marek purchased the historic baroque chateau Oranžérie in 2018. He reconstructed the building, which is now the headquarters of both GoodAI and Keen Software House.
References
External links
Marek Rosa's blog
1979 births
Living people
Video game producers
Slovak video game designers |
106430 | https://en.wikipedia.org/wiki/Strife%20%281996%20video%20game%29 | Strife (1996 video game) | Strife (also known as Strife: Quest for the Sigil) is a first-person shooter role-playing video game developed by Rogue Entertainment. It was released in May 1996 in North America by Velocity Inc. and in Europe by Studio 3DO. The shareware version was released on February 23, 1996, while the full version was released on May 31, 1996. It was the last commercially released standalone PC game to utilize the id Tech 1 engine from id Software. The plot takes place in a world taken over by a religious organization known as "The Order"; the protagonist, an unnamed mercenary (sometimes referred to as Strifeguy), becomes a member of the resistance movement which aims to topple the Order's oppressive rule.
Strife added some role-playing game elements to the classic first-person shooter formula, such as allowing players to talk to other characters in the game's world or improve the protagonist's abilities. Contemporary reviews praised these innovations and the story, but also criticized the quality of the graphics and the obsolete engine. Years after its release, the game was retrospectively considered to have been underappreciated in its day, and described as a precursor to games such as Deus Ex.
An enhanced version of the game, Strife: Veteran Edition (also dubbed The Original Strife: Veteran Edition) was developed and published by Night Dive Studios and released on Steam on December 12, 2014. A Nintendo Switch port of Veteran Edition was released on October 25, 2020.
Gameplay
Strife's gameplay is standard for the first-person shooter genre; the action is observed from the protagonist's viewpoint, and most of the game involves combat with the Order's infantry and war robots. The main character begins with just a dagger, but more powerful weapons, such as a crossbow or a flamethrower, can be found throughout the game. The protagonist also has an inventory where he can keep items for later use, such as first aid kits for healing, protective armor, or gold coins.
The game contains numerous friendly or neutral characters with whom the player can converse or trade. These characters often assign the player character missions, thus advancing the plot. If the player fires a weapon while not in a combat situation, it usually sets off an alarm and makes the guards attack the protagonist; however, certain weapons – the dagger and poisonous crossbow bolts – allow attacking enemies stealthily without activating the alarm.
Unlike most other first-person shooters of the time, Strife does not follow a linear series of levels. Instead, the town of Tarnhill acts as a central hub from which the player can travel back and forth between various areas, which stay the same as they were when the player left them.
The player character has two numerical attributes, accuracy and stamina, which can be improved at certain points in the game. The first attribute increases the accuracy of ranged weapons, while the latter increases the maximum amount of health.
Plot
The game is set some time after a catastrophic comet impact, which brought a deadly virus onto the planet. The resulting plague caused the deaths of millions of people, while other victims were mutated and began hearing the voice of a malevolent deity. They formed an organization called "The Order" and enslaved the rest of the populace. However, a rag-tag resistance movement, called "The Front", is trying to topple The Order's reign.
The unnamed protagonist of the game is a wandering mercenary, captured by Order troops near the town of Tarnhill. After killing the guards and escaping, he comes in contact with a man named Rowan, who makes him an offer to join the Front. The protagonist receives a communication device through which he can remain in contact with a female member of the Front, codenamed Blackbird. From then on, Blackbird provides assistance and commentary throughout the game. The protagonist heads to the Front's base, where the rebel leader, Macil, sends him on a number of missions in order to weaken the Order. After several acts of sabotage, the Front proceeds to assault the Order's castle; the protagonist, accompanying them in the attack, finds and kills a major member of the Order called "The Programmer". He loses consciousness upon touching the weapon that the Programmer had been using.
The mercenary wakes up in the castle, now taken over by the Front. Macil explains that the Programmer's weapon is one of the five fragments of the "Sigil", a powerful weapon worshipped by the Order. He orders the protagonist to find the remaining four. To this end, the mercenary visits a knowledgeable being called "The Oracle", who reveals that the next fragment is being held by another of the Order's leaders, The Bishop. After killing the Bishop and acquiring the second fragment, the protagonist returns to the Oracle only to be told that the third fragment is being held by Macil himself; the Oracle claims that Macil is a traitor who has been using the protagonist as a pawn in his scheme. At this point, the player must make a decision: either disbelieve the Oracle and kill it, or trust the Oracle and kill Macil. The choice has bearing on the rest of the plot.
Assuming the player trusts Macil and kills the Oracle - acquiring the third fragment - he receives another task from Macil: to deactivate a factory, built on the comet's impact site, where the Order is turning captured people into "bio-mechanical soldiers." Upon completing his mission, the protagonist learns that Macil has gone insane; he returns to the base and attempts to speak to Macil, who declares in his madness that he wishes to free the "one god", then attacks the protagonist. Upon killing Macil, the protagonist receives the fourth Sigil fragment. He then returns to the factory, which houses the laboratory of the Loremaster, another of the Order's leaders. After killing the Loremaster and thus acquiring the final Sigil piece, he proceeds to use the weapon to unlock a door leading to the comet's impact site. Inside, he finds an extraterrestrial spaceship. Within the ship waits an alien being known as "The Entity"; it is the one responsible for creating the Order and taking over the minds of mutated people. The mercenary kills it with the Sigil; its death means the end of the Order. He then finally meets Blackbird face to face. She tells him that his victory allowed mankind to create a vaccine for the virus, then kisses him.
The plot takes a different direction if the player decides to trust the Oracle and immediately kill Macil. Once he does so (claiming Macil's Sigil piece in the process), the Oracle dispatches him to the Loremaster's laboratory. Having killed the Loremaster and obtained the fourth fragment of the Sigil, the protagonist returns to the Oracle, who then reveals that it was using him all along in a bid to acquire the complete Sigil, use it to free the "one god", and attain eternal life. The mercenary kills the Oracle and, with all five fragments of the Sigil now in his possession, heads to the alien ship. There he encounters the Entity; however, the being speaks with Blackbird's voice, and implies that it was manipulating the protagonist throughout the game in order to regain freedom and take over the planet. After killing the Entity, the ending sequence is shown, this time less optimistic: the cure for the virus has not been invented and mankind's future is uncertain.
Development
Strife was originally being developed by Cygnus Studios, the creators of Raptor: Call of the Shadows, for id Software. Cygnus Studios was already working on another first-person shooter role-playing game, The Second Sword, utilizing the older ShadowCaster game engine by id Software. However, id Software asked Cygnus Studios to create a game based on the Doom engine and requested that the studio be relocated to Texas from Chicago while they were still developing Raptor so that they would be able to develop the new title for id Software. The studio agreed and after relocating, the studio no longer wanted to work on The Second Sword and the title was dropped in favor of working on Strife.
Cygnus Studios worked on Strife for a few months after finishing Raptor. However, internal conflicts soon arose and employees mutinied against Scott Host, the original founder of Cygnus Studios; Host relocated back to Chicago and Strife was essentially cancelled. The mutineers then founded Rogue Entertainment and resumed development on Strife. The shareware version was released on February 23, 1996, while the full version was released on May 31, 1996. It was the last commercially released standalone PC game to utilize the id Tech 1 engine from id Software.
Community support
After the game's official support ended, game engine recreations of Strife were created by Doom source port developers through reverse engineering. Notable authors were Jānis Legzdiņš (author of the Doom source port Vavoom), Randy Heit (author of ZDoom), Samuel Villarreal (author of SvStrife), and James Haley (author, with Samuel Villarreal, of Chocolate Strife). Except for the last, these allow for high resolution graphics modes, better mouselook, and expanded modding capabilities.
Strife was ported to the Commodore Amiga in 2013.
Digital re-release
In 2014 Night Dive Studios coordinated the digital re-release of Strife as Strife: Veteran Edition (or The Original Strife: Veteran Edition), after acquiring rights to the game. Because the game's source code had been lost by Rogue Entertainment, a derivative of the Chocolate Doom subproject Chocolate Strife was used as the game's engine, with its original programmers being contracted to do additional coding for the re-release. The source code of Strife: Veteran Edition was made available under GNU GPL-2.0-or-later on github.com. The enhanced version of the game was released as Strife: Veteran Edition by Night Dive Studios on Steam on December 12, 2014.
Reception
Strife received generally positive reviews upon its release. Reviewers took note of the novel gameplay: unlike most previous first-person shooters, such as Doom, the player must cooperate with friendly characters in Strife, while killing everyone in sight ends badly. Some reviews praised the story, but the reviewer in Next Generation criticized it for being too linear and its choices illusory.
The voice-acting in the game was generally rated positively; in particular, the voice and personality of Blackbird was praised by some reviewers, with GameSpot's Tal Blevins describing her voice as "by far the sexiest thing to ever resonate from my computer speakers."
Strife's graphics were rated poorly overall. The reviewers criticized the game for using the obsolete Doom engine, which they considered especially jarring when compared to more modern games such as Duke Nukem 3D or Quake. However, the hand-drawn illustrations which appear during dialogues and cutscenes were generally considered to be of good quality. Some reviewers doubted the novelty of the gameplay, pointing out that similar ideas had already been used in, for example, CyberMage: Darklight Awakening. Another aspect of the game which drew some ire was limiting the player to a single savegame slot, especially since making a wrong decision could render the game unwinnable.
Years after its release, retrospective reviews of the game were more positive. The reviewer of jeuxvideo.com recommended Strife for first-person shooter fans. Two articles about the game were published on the PC Gamer website. Richard Cobbett described the game in his "Saturday Crapshoot" series; he considered Strife to be an underappreciated game, but believed that it had not aged well. Paul Dean likewise concluded that Strife did not receive the attention it deserved back in its day, and encouraged readers to play the game. Both journalists compared Strife to the later Deus Ex, a more commercially successful attempt to combine the first-person shooter formula with role-playing elements.
References
External links
Strife Veteran Edition GPL Source Release on GitHub
1996 video games
Amiga 1200 games
Doom (franchise)
DOS games
Nintendo Switch games
First-person shooters
Linux games
Video games developed in the United States
Video games with 2.5D graphics
Video games with digitized sprites
Doom engine games
Windows games
Amiga games
Open-source video games
Commercial video games with freely available source code
Post-apocalyptic video games
Sprite-based first-person shooters
Works set in castles |
584820 | https://en.wikipedia.org/wiki/Source%20port | Source port | A source port is a software project based on the source code of a game engine that allows the game to be played on operating systems or computing platforms with which the game was not originally compatible.
Description
Source ports are often created by fans after the original developer hands over maintenance of a game by releasing its source code to the public (see List of commercial video games with later released source code). The term was coined after the release of the source code to Doom. Due to copyright issues concerning the sound library used by the original DOS version, id Software released only the source code to the Linux version of the game. Since the majority of Doom players were DOS users, the first step for a fan project was to port the Linux source code to DOS. A legitimate source port includes only the engine portion of the game and requires that the data files of the game in question already be present on users' systems. Source ports are in no way meant to encourage copyright infringement of software.
Like unofficial patches, source ports do not change the original gameplay; projects that do are by definition mods. However, many source ports add optional support for gameplay mods (e.g. DarkPlaces consists of a source port engine and a gameplay mod that are even distributed separately). While the primary goal of any source port is compatibility with newer hardware, many projects support other enhancements. Common examples of additions include support for higher video resolutions and different aspect ratios, hardware accelerated renderers (OpenGL and/or Direct3D), enhanced input support (including the ability to map controls onto additional input devices), 3D character models (in the case of 2.5D games), higher resolution textures, support to replace MIDI with digital audio (MP3, Ogg Vorbis, etc.), and enhanced multiplayer support using the Internet.
Several source ports have been created for various games specifically to address online multiplayer support. Most older games were not created to take advantage of the Internet and the low latency, high bandwidth Internet connections available to computer gamers today. Furthermore, old games may use outdated network protocols to create multiplayer connections, such as IPX protocol, instead of Internet Protocol. Another problem was games that required a specific IP address for connecting with another player. This requirement made it difficult to quickly find a group of strangers to play with — the way that online games are most commonly played today. To address this shortcoming, specific source ports such as Skulltag added "lobbies", which are basically integrated chat rooms in which players can meet and post the location of games they are hosting or may wish to join. Similar facilities may be found in newer games and online game services such as Valve's Steam, Blizzard's battle.net, and GameSpy Arcade.
Alternatives
If the source code of a piece of software is not available, alternative approaches to achieving portability are emulation, engine remakes, and static recompilation.
Notable source ports
See also
Enhanced remake
Game engine recreation
Static recompilation
Unofficial patch
List of commercial video games with later released source code
Fork (software development)
References
External links
Software maintenance
Software release
Unofficial adaptations |
30787757 | https://en.wikipedia.org/wiki/Jordan%20Ritter | Jordan Ritter | Jordan Ritter (born February 1, 1978) is an American serial entrepreneur, software architect and angel investor. He is best known for his work at Napster, the file-sharing service he co-founded along with Shawn Fanning and others. His time at Napster was documented in Joseph Menn's book All the Rave: The Rise and Fall of Shawn Fanning's Napster and Alex Winter's film Downloaded.
Early life
Jordan Ritter was born in Northridge, California and grew up in Texas and Florida. Ritter skipped the 5th grade when he was 10, and later went on to graduate from the International Baccalaureate Program at Hillsborough High School. Ritter attended college at Lehigh University on scholarship, starting as a sophomore and pursuing a double major in music and computer science. He dropped out in 1998, relocating to Boston, Massachusetts to begin his career in computer security.
Career
Netect
Ritter started out in the computer security industry, working as a paid hacker for the Boston office of Israeli computer security company Netect. While his main focus was probing major software and online systems for vulnerabilities, he also fixed code and conducted security audits for the company's own software HackerShield.
During his tenure, Ritter discovered and published several serious security vulnerabilities, including an anonymous, remote administrative privilege escalation in Washington University's FTP server. At the time, this affected approximately 80% of all computers on the Internet.
Early in 1999, Netect was purchased by BindView. Ritter was retained in the acquisition.
Napster
While working for BindView, Ritter met Shawn Fanning online through an IRC channel for computer hackers called #!w00w00. In May 1999, Fanning began soliciting Ritter and several other w00w00 members for help.
Since Fanning initially refused to allow inspection of the source code, members took this as a challenge and began reverse-engineering various aspects of the service. Ritter and fellow w00w00 member Seth McGann focused on the protocol and backend software, identifying bugs and proposing likely fixes to Fanning, while w00w00 member Evan Brewer managed the system that the server ran on. In early June 1999, Fanning asked Ritter to fully take over development of the server while Fanning focused on the Windows client. Two months later, Yosi Amram invested $250,000 in Napster and required that company operations relocate from Massachusetts to California. Ritter moved to Silicon Valley in September 1999, initially sharing an apartment with Fanning and Sean Parker at the San Mateo Marriott Residence Inn.
Ritter was directly responsible for many of the key evolutions of the backend service architecture during Napster's period of hyper-growth, including its novel load-balancing system, MySQL and subsequent Oracle database integration, and transparent full-mesh server linking. In addition to leading the backend team, Ritter also managed production systems deployment and network security, database systems and supporting infrastructure, and served as primary public contact point for all security-related issues concerning the service and operations. Ritter also oversaw the Moderator Community, a group of individuals who volunteered their free time to help moderate the various socially-focused portions of the Napster service.
Ritter resigned from Napster on November 14, 2000.
Cloudmark
In the summer of 2001, Ritter started development on anti-spam technology using machine-learning statistical classification algorithms. Named Spilter, the software was originally an open-source system that ran on UNIX-compatible messaging infrastructure such as Sendmail, Postfix and Qmail. Following the success of the product's first release, Ritter was convinced to pursue it instead as a commercial enterprise, which led him to close-source the software and begin mapping out a business plan.
Later that summer, Ritter discovered an open-source collaborative filtering program for email called Vipul's Razor, authored by Vipul Ved Prakash. Ritter reached out to Prakash (IRC nickname: hackworth) over an IRC channel for Perl programmers called #perl, proposing the two join their respective technologies together and form a company around the result. After a week of brainstorming on a whiteboard, the two agreed to form SEPsoft (Sodalitas Eliminetatum Purgamentum). Prakash later proposed using the name of Cloudmark instead, which is a planet-sized, inter-galactic messaging router featured in the book A Fire Upon the Deep by Vernor Vinge.
Throughout his tenure during 2001-2006, Ritter oversaw the architectural design and implementation of all Cloudmark commercial software, systems and operating infrastructure.
Ritter resigned as CTO in February 2006.
Columbia Music Entertainment
Ritter was introduced to Columbia Music Entertainment (CME) CEO Sadahiko Hirose in early 2006 during a business trip to Tokyo. Hirose planned to modernize CME by building a digital media distribution platform for its 100-year-spanning catalog of music, much of which was being lost to physical degradation and abandoned digital recording formats.
Ritter joined CME in February 2006 as Executive Advisor to the CEO. In April 2006 he became CTO.
In 2007, Ritter hired Ejovi Nuwere into CME, and together they began building a Japanese-based, competition-oriented promotional platform for new artists called Otorevo. The premise of the project was to prove a more cost- and time-efficient model for discovering viable artists to join the label, while at the same time establishing the first foothold for what would become CME's digital media platform. Despite the measurable successes of Otorevo, the CME Board of Directors voted to terminate all R&D projects in March 2008.
Ritter left CME in April 2008.
Zivity
After returning from Japan, Ritter was asked by Zivity Founders Scott Banister and Cyan Banister to advise the company on internal engineering management issues. Shortly afterwards, Ritter joined as CTO in order to overhaul the engineering organization while a CEO search was being conducted. In December 2008, a new CEO was appointed and Ritter left the company.
Ritter is an investor in Zivity.
CloudCrowd/Servio
Ritter founded crowdsourcing company CloudCrowd with Alex Edelstein in April 2009. Six months later, CloudCrowd officially launched its work platform on Facebook. In December 2010 the company renamed itself Servio while retaining the CloudCrowd brand, in order to more effectively differentiate the value propositions between online work and the crowdsourced work product.
As CTO, Ritter oversaw all architectural design and development of the company's services, systems and production infrastructure. As Head of Engineering, he also directly managed all Engineering, Ops, IT and QA personnel.
Ritter left Servio in December 2012.
Atlas Recall
A search engine for an individual's information normally stored in different email and storage systems.
Other accomplishments
Entrepreneur Magazine's "100 Most Brilliant Companies", June 2010
Nominated for Software Designer of the Year, WIRED Rave Awards, 2002
Interviewed in Playboy Magazine, April 2001
Member of InfoWorld CTO Advisory Council, 2001
Co-author of 4 US patents and 2 EU patents
Open-source
Ritter is a lifelong contributor to open-source software and the free software movement.
Notable software projects include:
ngrep
orapp
mongo-locking distributed locking service in Ruby for MongoDB
DataObjects SQL API layer for Ruby
DataMapper ORM framework for Ruby
DataMapper Salesforce adapter
References
External links
1978 births
Living people
American computer businesspeople |
51556069 | https://en.wikipedia.org/wiki/University%20of%20Chakwal | University of Chakwal | The University of Chakwal (UOC) was established in January, 2020 through the University of Chakwal ACT 2019 (Punjab Act No. I of 2020). An exalted chapter was added in the history of Pakistan as well as the Chakwal district, when the incumbent Minister for Higher Education Department, Govt. of the Punjab, Raja Yassir Humayun, presented the bill of establishment of "University of Chakwal" in the Punjab Assembly which was approved unanimously by combining Government Post Graduate College (Chakwal) and Old Kachehri Balkasar Sub-campuses of University of Engineering and Technology, Taxila.
The University of Chakwal has four faculties; Faculty of Engineering, Faculty of Basic Sciences, Faculty of Arts and Faculty of Computer Sciences & IT consisting of 21 departments. The existing batch 2015 of Electronics and Mechatronics Engineering studying under the University of Chakwal have been accredited by Pakistan Engineering Council under level II (Washington Accord).
The Faculty of Engineering of the University of Chakwal will offer Bachelor of Engineering degrees in Electronics, Mechatronics, and Petroleum & Gas engineering, while the Faculties of Arts, Basic Sciences, and Computer Sciences and Information Technology offer degree programs in various disciplines.
The university was originally planned to be called North Punjab University with an estimated budget of Rs. 1000 Million.
The name "University of Chakwal" was previously allocated to a private institute established in 2015. That university planned to offer 4-year Bachelor's degrees in basic sciences, social sciences, management sciences and computer sciences. A project of the Horizon College Chakwal, it was Chakwal's first private university, located on the main Bhaun Road in Chakwal, with four faculties (Faculty of Natural Sciences, Faculty of Arts and Social Sciences, Faculty of Management Sciences and Faculty of Computer Sciences) consisting of 18 departments.
In May 2017, the Higher Education Commission cancelled the No Objection Certificate (NOC) granted to the private University of Chakwal.
On 17 January 2020, University of Chakwal Act 2019 came into force establishing Public sector University of Chakwal.
Faculties
References
External links
UOC official website
Public universities and colleges in Punjab, Pakistan
2020 establishments in Pakistan
Educational institutions established in 2020
Chakwal District |
61642897 | https://en.wikipedia.org/wiki/MicroG | MicroG | MicroG (typically styled as microG) is a free and open-source implementation of proprietary Google libraries that serves as a replacement for Google Play Services on the Android operating system. It is maintained by German developer Marvin Wißfeld. MicroG allows mobile device users to access Google mobile services with less tracking of their device activity compared to Google Play Services. In a presentation, Wißfeld described microG as "the framework (libraries, services, patches) to create a fully-compatible Android distribution without any proprietary Google components".
Background
Although Google initially released the Android operating system as open-source software in 2007, the company gradually replaced some of Android's open-source components with proprietary software as Android grew in popularity. Marvin Wißfeld, a German software developer, created the NOGAPPS project in 2012 as a free and open-source drop-in replacement for Google Play Services, Google's closed-source system software that has been pre-installed on almost all Android devices. The NOGAPPS project became MicroG by 2016.
Features
MicroG allows Android apps to access replica application programming interfaces (APIs) that are provided by Google Play Services, including the APIs associated with Google Play, Google Maps, and Google's geolocation and messaging features. Unlike Google Play Services, MicroG does not track user activity on the device, and users can selectively enable and disable specific API features.
LineageOS for MicroG
In 2017, microG released "LineageOS for microG", a fork of LineageOS – a free and open-source Android-based operating system – that includes both MicroG and the F-Droid app store as pre-installed software. LineageOS for MicroG was created after LineageOS developers declined to integrate MicroG into LineageOS; the developers cited MicroG's need to spoof code signatures as a security concern. To enable MicroG's functionality, LineageOS for MicroG includes limited support for signature spoofing.
MicroG developers claim that older smartphones consume less battery power using LineageOS for MicroG compared to operating systems that use Google Play Services. LineageOS for MicroG supported 39 device models in 2017, and now supports the same device models as LineageOS. Devices receive newer versions of LineageOS for MicroG through weekly over-the-air updates.
Adoption
For a 2018 paper on Android app privacy, security researchers from Nagoya University used MicroG to bypass Google's SafetyNet security mechanism on an Android Marshmallow emulator. The researchers altered Android's package manager and implemented signature spoofing to enable MicroG on the emulator.
The /e/ operating system, a privacy-oriented fork of LineageOS, includes MicroG instead of Google Play Services. In 2019, /e/ began selling refurbished smartphones with MicroG pre-installed.
Essential Products' "Project Gem" smartphone, previously in development, used a fork of Android that eschews Google Play Services in favor of MicroG, according to Essential's commits to the Android codebase in late 2019. Essential Products shut down in February 2020.
In 2020, OmniROM began providing builds with MicroG built in for certain devices.
Reception
In 2016, Nathan Willis of LWN.net expected MicroG to be a "welcome addition" for users of alternative Android-based projects, including CyanogenMod, Replicant, and Blackphone. Willis suggested that MicroG could increase its adoption by collaborating with these projects.
Corbin Davenport, writing for Android Police in April 2018, installed LineageOS for MicroG on a Xiaomi Mi 4c smartphone using the Team Win Recovery Project image in an experiment in which he exclusively used open-source software on Android. Davenport was unable to log in to his Google Account through MicroG and concluded that "Going all open-source isn't feasible", despite the high quality of some open-source Android apps from F-Droid. Lifehacker's Brendan Hesse recommended MicroG in his November 2018 tutorial on "quitting Google". Hesse saw MicroG as a "promising" alternative to Google Play Services that was "incomplete and still in development", but said that it was "usable" and "runs pretty well".
Steven J. Vaughan-Nichols, in a 2019 ZDNet review of a refurbished Samsung Galaxy S9+ smartphone from /e/, determined that applications which were more closely integrated with Google mobile services were less likely to function properly with MicroG. During his device test, Vaughan-Nichols was able to use Signal, Telegram, Facebook, and other Android apps with no problems, while Lyft and Uber operated less reliably; Vaughan-Nichols was not able to run Google Maps or Twitter at all, concluding, "applications can be a pain" and "installing /e/ is a monster of a job."
References
External links
LineageOS for microG
LineageOS for microG repository on GitHub
Custom Android firmware
Free mobile software
Software forks
Android forks
Free computer libraries
OSGi
The OSGi Alliance (formerly known as the Open Services Gateway initiative) is an open standards organization for computer software founded in March 1999. It originally specified and continues to maintain the OSGi standard. The OSGi specification describes a modular system and a service platform for the Java programming language that implements a complete and dynamic component model, something that does not exist in standalone Java/VM environments.
Description
Applications or components, coming in the form of bundles for deployment, can be remotely installed, started, stopped, updated, and uninstalled without requiring a reboot; management of Java packages/classes is specified in great detail. Application life cycle management is implemented via APIs that allow for remote downloading of management policies. The service registry allows bundles to detect the addition of new services, or the removal of services, and adapt accordingly.
The OSGi specifications have evolved beyond the original focus of service gateways, and are now used in applications ranging from mobile phones to the open-source Eclipse IDE. Other application areas include automobiles, industrial automation, building automation, PDAs, grid computing, entertainment, fleet management and application servers.
In October 2020, the OSGi Alliance announced the transition of the standardization effort to the Eclipse Foundation, subsequent to which it would shut down.
Specification process
The OSGi specification is developed by the members in an open process and made available to the public free of charge under the OSGi Specification License. The OSGi Alliance has a compliance program that is open to members only. As of November 2010, there were seven certified OSGi framework implementations. A separate page lists both certified and non-certified OSGi Specification Implementations, which include OSGi frameworks and other OSGi specifications.
Architecture
OSGi is a Java framework for developing and deploying modular software programs and libraries. Each bundle is a tightly coupled, dynamically loadable collection of classes, jars, and configuration files that explicitly declare their external dependencies (if any).
The framework is conceptually divided into the following areas:
Bundles – Normal JAR components with extra manifest headers.
Services – The services layer connects bundles in a dynamic way by offering a publish-find-bind model for plain old Java interfaces (POJIs) or plain old Java objects (POJOs).
Services Registry – The application programming interface for management services.
Life-Cycle – The application programming interface for life cycle management (install, start, stop, update, and uninstall) of bundles.
Modules – The layer that defines encapsulation and declaration of dependencies (how a bundle can import and export code).
Security – The layer that handles the security aspects by limiting bundle functionality to pre-defined capabilities.
Execution Environment – Defines what methods and classes are available in a specific platform. There is no fixed list of execution environments, since it is subject to change as the Java Community Process creates new versions and editions of Java. However, the following set is currently supported by most OSGi implementations:
CDC-1.0/Foundation-1.0
CDC-1.1/Foundation-1.1
OSGi/Minimum-1.0
OSGi/Minimum-1.1
JRE-1.1
From J2SE-1.2 up to J2SE-1.6
Bundles
A bundle is a group of Java classes and additional resources, equipped with a detailed manifest (MANIFEST.MF) file describing its contents and the additional services needed to give the included group of Java classes more sophisticated behaviors, to the extent that the entire aggregate can be deemed a component.
Below is an example of a typical MANIFEST.MF file with OSGi Headers:
Bundle-Name: Hello World
Bundle-SymbolicName: org.wikipedia.helloworld
Bundle-Description: A Hello World bundle
Bundle-ManifestVersion: 2
Bundle-Version: 1.0.0
Bundle-Activator: org.wikipedia.Activator
Export-Package: org.wikipedia.helloworld;version="1.0.0"
Import-Package: org.osgi.framework;version="1.3.0"
The meaning of the contents in the example is as follows:
Bundle-Name: Defines a human-readable name for this bundle; it simply assigns a short name to the bundle.
Bundle-SymbolicName: The only required header, this entry specifies a unique identifier for a bundle, based on the reverse domain name convention (also used by Java packages).
Bundle-Description: A description of the bundle's functionality.
Bundle-ManifestVersion: Indicates the OSGi specification to use for reading this bundle.
Bundle-Version: Assigns a version number to the bundle.
Bundle-Activator: Indicates the class name to be invoked once a bundle is activated.
Export-Package: Expresses which Java packages contained in a bundle will be made available to the outside world.
Import-Package: Indicates which Java packages will be required from the outside world to fulfill the dependencies needed in a bundle.
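Because MANIFEST.MF follows the standard JAR manifest format, the OSGi headers above can be inspected with the stock java.util.jar.Manifest class. The following minimal sketch (the ManifestReader class and its header helper are invented for this example) parses the example manifest text and prints two of its headers:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.jar.Manifest;

public class ManifestReader {
    // Reads one main-section header from MANIFEST.MF text; returns null if absent.
    public static String header(String manifestText, String name) {
        try {
            Manifest mf = new Manifest(new ByteArrayInputStream(
                    manifestText.getBytes(StandardCharsets.UTF_8)));
            return mf.getMainAttributes().getValue(name);
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for an in-memory stream
        }
    }

    public static void main(String[] args) {
        // Part of the example manifest from above.
        String text = "Bundle-SymbolicName: org.wikipedia.helloworld\n"
                + "Bundle-Version: 1.0.0\n"
                + "Import-Package: org.osgi.framework;version=\"1.3.0\"\n";
        System.out.println(header(text, "Bundle-SymbolicName")); // org.wikipedia.helloworld
        System.out.println(header(text, "Bundle-Version"));      // 1.0.0
    }
}
```

Note that JAR manifests are expected to end with a newline; a final header line without one may be silently dropped by the parser.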
Life-cycle
The Life Cycle layer allows bundles to be dynamically installed, started, stopped, updated and uninstalled. Bundles rely on the module layer for class loading but add an API to manage the modules at run time. The life cycle layer introduces dynamics that are normally not part of an application. Extensive dependency mechanisms are used to assure the correct operation of the environment. Life cycle operations are fully protected with the security architecture.
Below is an example of a typical Java class implementing the BundleActivator interface:
package org.wikipedia;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
public class Activator implements BundleActivator {
    private BundleContext context;

    @Override
    public void start(BundleContext context) throws Exception {
        System.out.println("Starting: Hello World");
        this.context = context;
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        System.out.println("Stopping: Goodbye Cruel World");
        this.context = null;
    }
}
Services
Standard services
The OSGi Alliance has specified many services. Services are specified by a Java interface. Bundles can implement this interface and register the service with the Service Registry. Clients of the service can find it in the registry, or react to it when it appears or disappears.
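The publish-find-bind interaction can be sketched with a toy in-memory registry. This is not the real OSGi API — in a framework, bundles publish through BundleContext.registerService and clients obtain service references from the framework — and the MiniRegistry class and Greeter interface are invented for this illustration; the sketch only shows why clients depend on an interface rather than an implementing class:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MiniRegistry {
    // A deliberately simplified stand-in for the OSGi service registry:
    // implementations are published under a Java interface and found by that interface.
    private final Map<Class<?>, Object> services = new ConcurrentHashMap<>();

    public <T> void register(Class<T> type, T impl) {
        services.put(type, impl);             // publish
    }

    public <T> T find(Class<T> type) {
        return type.cast(services.get(type)); // find and bind (null if absent)
    }

    // A hypothetical service interface used only for this example.
    public interface Greeter {
        String greet(String name);
    }

    public static void main(String[] args) {
        MiniRegistry registry = new MiniRegistry();
        // A "bundle" registers its implementation under the service interface.
        registry.register(Greeter.class, name -> "Hello, " + name);
        // A client looks the service up by interface only.
        Greeter greeter = registry.find(Greeter.class);
        System.out.println(greeter.greet("OSGi")); // Hello, OSGi
    }
}
```

In a real framework the lookup is dynamic: services can appear and disappear at run time, and registered listeners are notified of both events.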
The specified services fall into three groups: system services, protocol services, and miscellaneous services.
Organization
The OSGi Alliance was founded by Ericsson, IBM, Motorola, Sun Microsystems and others in March 1999. Before incorporating as a nonprofit corporation, it was called the Connected Alliance.
Among its members are more than 35 companies from quite different business areas, for example Adobe Systems, Deutsche Telekom, Hitachi, IBM, Liferay, Makewave, NEC, NTT, Oracle, Orange S.A., ProSyst, Salesforce.com, Siemens, Software AG and TIBCO Software.
The Alliance has a board of directors that provides the organization's overall governance. OSGi officers have various roles and responsibilities in supporting the alliance. Technical work is conducted within Expert Groups (EGs) chartered by the board of directors, and non-technical work is conducted in various working groups and committees. The technical work conducted within Expert Groups includes developing specifications, reference implementations, and compliance tests. These Expert Groups have produced five major releases of the OSGi specifications.
Dedicated Expert Groups exist for the enterprise, mobile, vehicle and the core platform areas.
The Enterprise Expert Group (EEG) is the newest EG and is addressing Enterprise / Server-side applications.
In November 2007 the Residential Expert Group (REG) started to work on specifications to remotely manage residential/home-gateways.
In October 2003, Nokia, Motorola, IBM, ProSyst and other OSGi members formed a Mobile Expert Group (MEG) to specify a MIDP-based service platform for the next generation of smart mobile phones, addressing needs that CLDC (unlike CDC) cannot manage. MEG became part of OSGi with R4.
Specification versions
OSGi Release 1 (R1): May 2000
OSGi Release 2 (R2): October 2001
OSGi Release 3 (R3): March 2003
OSGi Release 4 (R4): October 2005 / September 2006
Core Specification (R4 Core): October 2005
Mobile Specification (R4 Mobile / JSR-232): September 2006
OSGi Release 4.1 (R4.1): May 2007 (AKA JSR-291)
OSGi Release 4.2 (R4.2): September 2009
Enterprise Specification (R4.2): March 2010
OSGi Release 4.3 (R4.3): April 2011
Core: April 2011
Compendium and Residential: May 2012
OSGi Release 5 (R5): June 2012
Core and Enterprise: June 2012
OSGi Release 6 (R6): June 2015
Core: June 2015
OSGi Release 7 (R7): April 2018
Core and Compendium: April 2018
OSGi Release 8 (R8): December 2020
Related standards
MHP / OCAP
Universal Plug and Play (UPnP)
DPWS
ITU-T G.hn
LonWorks
CORBA
CEBus
EHS (KNX) / CECED CHAIN
Java Management Extensions
Projects using OSGi
Adobe Experience Manager - an enterprise Content Management System
Apache Aries - Blueprint Container implementations and extensions of application-focused specifications defined by OSGi Enterprise Expert Group
Apache Sling - OSGi-based applications layer for JCR content repositories
Atlassian Confluence and JIRA - the plug-in architecture for this enterprise wiki and issue tracker uses OSGi
Business Intelligence and Reporting Tools (BIRT) Project - Open source reporting engine
Cytoscape - an open source bioinformatics software platform (as of version 3.0)
DataNucleus - open source data services and persistence platform in service-oriented architectures
DDF - Distributed Data Framework provides free and open-source data integration
Dotcms - open source Web Content Management
EasyBeans - open source EJB 3 container
Eclipse - open source IDE and rich client platform
iDempiere - an OSGi implementation of the open source ERP branch GlobalQSS Adempiere361, originally started by Low Heng Sin
Eclipse Virgo - open source microkernel-based server constructed of OSGi bundles and supporting OSGi applications
GlassFish (v3) - application server for Java EE
Fuse ESB - a productized and supported release of ServiceMix 4.
Integrated Genome Browser - an open source, desktop GUI for visualizing, exploring, and analyzing genome data
IntelliJ - Java IDE and rich client platform with free community edition
JBoss - Red Hat's JBoss Application Server
JOnAS 5 - open source Java EE 5 application server
Joram - open source messaging server (JMS, MQTT, AMQP, etc.)
JOSSO 2 - Atricore's open source standards-based Identity and Access Management Platform
Liferay DXP - open source and commercial enterprise portal platform that uses OSGi from version 7.x
Lucee 5 - open source CFML Web Application Server
NetBeans - open source IDE and rich client platform
Nuxeo - open source ECM Service Platform
Open Daylight Project - Project with the goal of accelerating the adoption of software-defined networking
OpenEJB - open source OSGi-enabled EJB 3.0 container that can be run both in standalone or embedded mode
openHAB - open source home automation software
OpenWorm - open source software simulation of C. elegans, via the dedicated Geppetto modular platform
Akana - API Gateway, Portal and Analytics server from Akana (formerly SOA Software)
SpringSource dm Server - open source microkernel-based server constructed of OSGi bundles and supporting OSGi applications
Weblogic - Oracle Weblogic Application Server
WebSphere - IBM Websphere JEE Application Server
WebMethods - SoftwareAG WebMethods
WSO2 Carbon - Base platform for WSO2's enterprise-grade Open source middleware stack
Current framework implementations
See also
OSGi Specification Implementations
References
Further reading
External links
Oredev 2008 - Architecture - OSGi Now and Tomorrow
Eclipse Equinox Article Index - Articles on an open source OSGi implementation
Standards organizations in the United States
Articles with example Java code
Free software programmed in Java (programming language)
1999 establishments in the United States
Embedded systems
Organizations based in California
Sescoi
Sescoi is a developer of industrial software for computer-aided manufacturing, enterprise resource planning and extended enterprise productivity. Its WorkNC software is one of the market leaders in the CAD/CAM field and is used by more than 25% of companies in demanding countries such as Japan. Sescoi also develops WorkPLAN, a range of ERP software products for custom manufacturers and project based companies. As of 2011, Sescoi had more than 5000 customers and 11000 licenses sold worldwide. Sescoi and its products were acquired by Vero Software in January 2013.
History
Sescoi was created by Bruno Marko in 1987. The company name comes from the French acronym "Société Européenne Spécialisée en Communication Organisation et Informatique". Sescoi was an early pioneer in the development of 3D CAM software with the launch of WorkNC in 1988. WorkNC is well known for its focus on automation and ease of use.
In 1992 the company launched WorkPLAN, its first ERP software for custom manufacturing and project management.
Sescoi acquired Xitron in 2001 and Mecasoft Industrie, developer of SolidConcept, in 2002.
The company launched WorkNC-CAD in 2002, WorkNC 5-axis in 2003 and WorkNC G3 in 2007.
A new generation of WorkPLAN ERP was developed and launched as two complementary products. The first one is MyWorkPLAN, a modular ERP for project management, launched in 2006. The second is WorkPLAN Enterprise, a full ERP software for custom manufacturers, mold and die makers and engineering departments, launched in 2008. Both of these products use MySQL as a database engine, and feature a redesigned user interface, with a navigation tree similar to that used in CAD systems.
In 2008 Sescoi launched WorkXPlore 3D, a high-speed collaborative viewer for analyzing and sharing 3D CAD files without requiring the original CAD application.
In 2009, Sescoi launched WorkNC Dental, CAD/CAM software for automatic machining of prosthetic appliances, implants or dental structures, and WorkNC Wire EDM, software for wire EDM.
As of 2011, Sescoi had offices in the US, UK, France, Germany, Spain, Japan, India, China and Korea, and more than 50 distributors around the world serving customers in industries such as Automotive, Aerospace and Defense, Engineering, General Mechanical, Medical & Dental, Tooling and Mold & Die.
Sescoi was acquired by the Vero Software Group in January 2013.
Main products
WorkNC - Automatic CAD/CAM software for machining complex parts from 2 to 5 axis.
WorkPLAN Enterprise - Custom manufacturing ERP software
MyWorkPLAN - Project management ERP software
WorkXPlore 3D - High-speed collaborative viewer for analyzing and sharing 3D CAD files
WorkNC Dental - CAD/CAM software for automatic machining of prosthetic appliances, implants or dental structures
Sponsorship activity
Sescoi was involved in sport sponsorship, and was primarily known for having sponsored the Formula 1 team Prost Grand Prix in 1999 and 2000 as well as the German racing driver Catharina Felser.
Sescoi sponsored and worked with Eric Barone, a French sportsman, well known for having beaten the world speed record descending on a bicycle. His record on snow is 222 km/h achieved at Les Arcs, while on soil it is 172 km/h, achieved at the Cerro Negro volcano.
References
External links
Sescoi website
Software companies of France
Companies established in 1987
ERP software companies
Computer-aided design software
Computer-aided manufacturing software
French brands
Causata
Causata is an enterprise software company based in San Mateo, California, United States with development offices in London, England, UK. It provides software which is used by business-to-consumer marketers to manage interactions with customers. The company launched the fourth version of its products in February 2013.
Corporate history
Causata was founded in 2009 by Paul Phillips and opened its first office in London, England, UK. The company opened its San Mateo, California office in 2011. CEO Paul Wahl, a former CEO of SAP America and former COO of Siebel Systems, joined the company in March 2012. Causata has received $23 million in venture funding from Accel Partners London, with the Series C round coming in February 2013.
In 2013, Causata was acquired by NICE Systems.
Products
Causata's software uses HBase, the NoSQL database on the Hadoop Distributed File System. It has industry-specific applications for cross-sell, acquisition, and retention to enable marketers to personalize web and mobile interfaces, operate email and mobile messaging campaigns, enable targeted advertising and create context-driven sales and service.
The Identity Graph algorithms use first-party online and offline customer data to combine customers' identities and event timelines.
The Predictive Profile compares individual customer paths and histories against each other to predict customer intent. Machine learning algorithms based on reinforcement methodologies build real-time predictive analytics models for individual customers.
Decisioning serves targeted offers, promotions and ads to potential customers, based on information gathered by the previously mentioned analysis.
References
Software companies based in California
Software companies established in 2009
Software companies of the United States
Remote service software
Remote service software is used by equipment manufacturers to remotely monitor, access and repair products in use at customer sites. It is a secure, auditable gateway for service teams to troubleshoot problems, perform proactive maintenance, assist with user operations and monitor performance. This technology is typically implemented in mission-critical environments like hospitals or IT data centers – where equipment downtime is intolerable.
Benefits
Remote service software helps to:
Increase uptime, improve performance and extend the life of a device
Control service costs by deploying patches and upgrades remotely, and ensure a first-time fix when an onsite visit is required
Streamline administration of pay-per-use models, with automated usage monitoring
Focus highly trained service teams on preventative maintenance, by diagnosing and repairing issues before they cause system failure
Increase customer satisfaction and loyalty
Manufacturers are using aftermarket service as a competitive differentiator. Remote service software provides a platform for manufacturers to offer and meet stringent service level agreements (SLAs) without increasing the size of their service team.
Key characteristics of remote service software
Proactive: Remote monitoring of devices in use allows service teams to detect potential issues before they escalate, degrade performance or cause a system failure. This early warning system is a key component of issue avoidance and of the ability to meet more stringent key performance indicators (KPIs) in SLAs. Once an issue is detected, service professionals can also use remote service technology gateways to push patches or resolve issues.
Secure: Secure access is a core consideration of remote service. Solutions should adhere to compliance guidelines and protect the remote connection between the manufacturer and the customer – ensuring that no data has been stolen and no outsiders have been granted access.
Auditable: SOX, HIPAA and PCI compliance regulations require businesses to keep track of who does what on their network. Auditors require forensic logs to trace the steps of every interaction a remote service technician has with every device.
Other names for remote service software
IDM: intelligent device management
SSM: strategic service management
RDM: remote device management
References
Storage That Phones Home, Byte & Switch, April, 2007
Technology systems
Business software
FLAME University
FLAME University is a private, coeducational and fully residential liberal education university located in Pune, India. It was formerly known as FLAME - Foundation for Liberal and Management Education.
Leadership
The academic program at the FLAME University is led by Vice-Chancellor, Dr. Dishan Kamdar. Kamdar was previously the Deputy Dean, Academic Programmes & Professor of Organisational Behaviour at the Indian School of Business, the top ranked global business school in India.
Former Vice-Chancellor Dr. Devi Singh, who stepped down after a term at the helm, continues to play a leadership role as a member of the Governing Body. In the past, Singh served as Director of the Indian Institute of Management Lucknow.
Earlier, as Foundation for Liberal And Management Education institute, it was headed by Prof. Indira Parikh, former Dean of Indian Institute of Management Ahmedabad.
Academic programs
Undergraduate Program -
FLAME University offers both three-year and optional four-year undergraduate programs with the degree nomenclature determined by the major pursued.
Three-year undergraduate degrees and their majors.
B.A. - Economics, Psychology, Literary & Cultural Studies, International Studies, Environmental Studies, Journalism, Public Policy, Sociology.
B.A. (Hons) - Economics***.
B.Sc. - Applied Mathematics, Computer Science, Data Science and Economics*, Computer Science and Design*.
B.Sc. (Hons) - Computer Science***.
BBA - Finance, Business Analytics, Marketing, Human Resource Management, Entrepreneurship, Operations, General Management**, Design Management*.
BBA (Communications Management) - Advertising & Branding, Digital Marketing & Communications, Film & Television Management, Communication Studies**.
Optional four-year undergraduate degrees and their majors. All four-year undergraduate programs lead to honours degrees.
B.A. (Hons) - Economics, Psychology, Literary & Cultural Studies, International Studies, Environmental Studies, Journalism, Public Policy, Sociology.
B.Sc. (Hons) - Applied Mathematics, Data Science and Economics, Computer Science, Computer Science and Design.
BBA (Hons) - Finance, Business Analytics, Marketing, Human Resource Management, Entrepreneurship, Operations, Design Management.
BBA (Communications Management) (Hons)- Advertising & Branding, Digital Marketing & Communications, Film & Television Management.
* Interdisciplinary majors. Not offered as minors. Only offered as majors. No minor combination possible. | ** Only available as a major in the three-year undergraduate program. | *** three-year undergraduate honours program available without minors.
Postgraduate Programs -
FLAME University offers MBA, MBA (Communications Management), Postgraduate Program in Entrepreneurship and Innovation (PGPEI), and M.Sc. (Economics) postgraduate programs.
Doctoral Program -
Areas offered under the program are Management, Economics, Psychology, and Data Science.
The FLAME Scholars Program -
The one-year FLAME Scholars Program builds on the foundations laid in the undergraduate years. It emphasizes mentorship, research, and participatory learning. After successful completion of the program requirements, the student is awarded a postgraduate diploma.
International collaborations
FLAME University has collaborations with international universities, institutes and research bodies. These include:
Wellesley College: Academic collaboration with Wellesley College, a private, women's liberal arts college, ranked the 3rd best national liberal arts college in the United States in 2017, that facilitates international academic exchange, develop academic and scientific relationships, and support collaborative research activities.
Amherst College
Kelley School of Business, Indiana University Bloomington: Partnership with Kelley School of Business, ranked ninth for its undergraduate business program according to US News & World Report, that includes study abroad programs, collaborative research, faculty exchange, executive education and curriculum development. The partnership is expected to extend to other areas as well.
Yale University: Association with Yale University, an American Ivy League research university, that allows FLAME University students to attend Yale's Summer Session Program.
Babson College: Accepted as a member of the Babson Collaborative, an initiative by Babson College, a private business school in the United States consistently ranked as the number one college in entrepreneurship education for nearly three decades, to bring together educational institutions for the exchange of best practices in entrepreneurship education.
Sciences Po
IE University, Spain
York University, Canada
Boston University, U.S.
Global Liberal Arts Alliance: FLAME University has accepted membership into the Global Liberal Arts Alliance. The Global Liberal Arts Alliance is an association of liberal arts colleges around the world. It was established in 2009. The goal of the consortium is to provide an international framework for cooperation among institutions.
India Global Higher Education Alliance: FLAME is a Founding Member of this Alliance between prestigious autonomous universities in India and outside India. Other institutions include Columbia University, University of Cambridge, University of Hong Kong, Pomona College, Azim Premji University and MIT.
Library of Mistakes: The university has announced its association with the Library of Mistakes, Edinburgh; a charitable venture founded to promote the study of financial history.
Campus
The campus of FLAME University is in Pune, Maharashtra. The campus has been designed by Vastu Shilpa Consultants, an architectural practice founded by 2018 Pritzker Laureate B.V. Doshi.
See also
Liberal education
Liberal arts education
List of liberal arts colleges
References
Private universities in India
Universities and colleges in Pune
FLAME University
Educational institutions established in 2015
2015 establishments in Maharashtra
Software engineering professionalism
Software engineering professionalism is a movement to make software engineering a profession, with aspects such as degree and certification programs, professional associations, professional ethics, and government licensing. The field is a licensed discipline in Texas in the United States (Texas Board of Professional Engineers, since 2013), in Engineers Australia (course accreditation since 2001, not licensing), and in many provinces in Canada.
History
In 1993 the IEEE and ACM began a joint effort called JCESEP, which evolved into SWECC in 1998 to explore making software engineering into a profession. The ACM pulled out of SWECC in May 1999, objecting to its support for the Texas professionalization efforts of having state licenses for software engineers. ACM determined that the state of knowledge and practice in software engineering was too immature to warrant licensing, and that licensing would give false assurances of competence even if the body of knowledge were mature. The IEEE continued to support making software engineering a branch of traditional engineering.
In Canada the Canadian Information Processing Society established the Information Systems Professional certification process. Also, by the late 1990s (1999 in British Columbia) the discipline of software engineering as a professional engineering discipline was officially created. This has caused some disputes between the provincial engineering associations and companies who call their developers software engineers, even though these developers have not been licensed by any engineering association.
In 1999, the Panel of Software Engineering was formed as part of the settlement between Engineering Canada and the Memorial University of Newfoundland over the school's use of the term "software engineering" in the name of a computer science program. Concerns were raised that inappropriate use of the name "software engineering" to describe non-engineering programs could lead to student and public confusion, and ultimately threaten public safety. The Panel issued recommendations to create a Software Engineering Accreditation Board, but the task force created to carry out the recommendations was unable to get the various stakeholders to agree to concrete proposals, resulting in separate accreditation boards.
Ethics
Software engineering ethics is a large field. In some ways it began as an unrealistic attempt to define bugs as unethical. More recently it has been defined as the application of both computer science and engineering philosophy, principles, and practices to the design and development of software systems. Due to this engineering focus and the increased use of software in mission critical and human critical systems, where failure can result in large losses of capital but more importantly lives such as the Therac-25 system, many ethical codes have been developed by a number of societies, associations and organizations. These entities, such as the ACM, IEEE, EGBC and Institute for Certification of Computing Professionals (ICCP) have formal codes of ethics. Adherence to the code of ethics is required as a condition of membership or certification. According to the ICCP, violation of the code can result in revocation of the certificate. Also, all engineering societies require conformance to their ethical codes; violation of the code results in the revocation of the license to practice engineering in the society's jurisdiction.
These codes of ethics usually have much in common. They typically relate the need to act consistently with the client's interest, employer's interest, and most importantly the public's interest. They also outline the need to act with professionalism and to promote an ethical approach to the profession.
A Software Engineering Code of Ethics has been approved by the ACM and the IEEE-CS as the standard for teaching and practicing software engineering.
Examples of codes of conduct
The following are examples of codes of conduct for Professional Engineers. These two have been chosen because both jurisdictions have a designation for Professional Software Engineers.
Engineers and Geoscientists of British Columbia (EGBC): The association's Code of Ethics requires all members to act at all times with fairness, courtesy and good faith toward their employers, employees and clients; to uphold truth, honesty and trustworthiness; and to safeguard human life and the environment, so that government and the public can rely on BC's professional engineers and geoscientists. This is one of the many ways in which BC's Professional Engineers and Professional Geoscientists maintain their competitive edge in today's global marketplace.
Association of Professional Engineers and Geoscientists of Alberta (APEGA): Unlike British Columbia, the Alberta government granted self-governance to engineers, geoscientists and geophysicists. All APEGA members must accept legal and ethical responsibility for their work and hold paramount the interests of the public and society. APEGA publishes standards and guidelines of professional practice to uphold the protection of the public interest in engineering, geoscience and geophysics in Alberta.
Opinions on ethics
Bill Joy argued that "better software" can only enable its privileged end users, make reality more power-pointy as opposed to more humane, and ultimately run away with itself so that "the future doesn't need us." He openly questioned the goals of software engineering in this respect, asking why it isn't trying to be more ethical rather than more efficient. In his book Code and Other Laws of Cyberspace, Lawrence Lessig argues that computer code can regulate conduct in much the same way as the legal code. Lessig and Joy urge people to think about the consequences of the software being developed, not only in a functional way, but also in how it affects the public and society as a whole.
Overall, due to the youth of software engineering, many of the ethical codes and values have been borrowed from other fields, such as mechanical and civil engineering. However, there are many ethical questions that even these much older disciplines have not encountered. Questions about the ethical impact of internet applications, which have a global reach, were not encountered until recently, and other ethical questions are still to come. This means the ethical codes for software engineering are a work in progress that will change and update as more questions arise.
Independent licensing and certification exams
Starting in 2002, the IEEE Computer Society offered the Certified Software Development Professional (CSDP) certification exam (in 2015 it was replaced by several similar certifications). A group of experts from industry and academia developed and maintained the exam. Donald Bagert, and in a later period Stephen Tockey, headed the certification committee. The exam content centered on the SWEBOK (Software Engineering Body of Knowledge) guide, with additional emphasis on the Professional Practices and Software Engineering Economics knowledge areas (KAs). The motivation was to produce a structure at an international level for software engineering's knowledge areas.
Criticism of licensing
Professional licensing has been criticized for many reasons.
The field of software engineering is too immature
Licensing would give false assurances of competence even if the body of knowledge were mature
Software engineers would have to study years of calculus, physics, and chemistry to pass the exams, which is irrelevant to most software practitioners. Many, perhaps most, computer science majors do not earn degrees from engineering schools, so they would probably be unqualified to pass engineering exams.
Licensing by country
United States
The Bureau of Labor Statistics (BLS) classifies computer software engineers as a subcategory of "computer specialists", along with occupations such as computer scientist, programmer, database administrator and network administrator. The BLS classifies all other engineering disciplines, including computer hardware engineers, as engineers.
Many states prohibit unlicensed persons from calling themselves an engineer, or from indicating branches or specialties not covered by licensing acts. In many states, the title Engineer is reserved for individuals with a Professional Engineering license, indicating that they have shown a minimum level of competency through accredited engineering education, qualified engineering experience, and engineering board examinations.
In April 2013 the National Council of Examiners for Engineering and Surveying (NCEES) began offering a Professional Engineer (PE) exam for Software Engineering. The exam was developed in association with the IEEE Computer Society. NCEES ended the exam in April 2019 due to lack of participation.
The American National Society of Professional Engineers provides a model law and lobbies legislatures to adopt occupational licensing regulations. The model law requires:
a four-year degree from a university program accredited by the Engineering Accreditation Committee (EAC) of the Accreditation Board for Engineering and Technology (ABET),
an eight-hour examination on the fundamentals of engineering (FE) usually taken in the senior year of college,
four years of acceptable experience,
a second examination on principles and practice, and
written recommendations from other professional engineers.
Some states require continuing education.
In Texas, Donald Bagert became the first professional software engineer in the U.S. on September 4, 1998 or October 9, 1998. As of May 2002, Texas had issued 44 professional engineering licenses for software engineers. Rochester Institute of Technology granted the first Software Engineering bachelor's degrees in 2001. Other universities have followed.
Canada
In Canada, the use of the job title Engineer is controlled in each province by self-regulating professional engineering organizations, which are also tasked with enforcement of the governing legislation. The intent is that any individual holding themselves out as an engineer has been verified to have been educated to a certain accredited level, and that their professional practice is subject to a code of ethics and peer scrutiny. It is illegal to use the title Engineer in Canada unless an individual is licensed.
IT professionals with degrees in other fields (such as computer science or information systems) are restricted from using the title Software Engineer, or wording Software Engineer in a title, depending on their province or territory of residence.
In some instances, cases have been taken to court regarding the illegal use of the protected title Engineer.
Most Canadians who earn professional software engineering licenses study software engineering, computer engineering or electrical engineering. In many cases these people are already qualified to become professional engineers in their own fields but choose to be licensed as software engineers to differentiate themselves from computer scientists.
In British Columbia, The Limited Licence is granted by the Engineers and Geoscientists of British Columbia. Fees are collected by EGBC for the Limited Licensee.
Ontario
In Ontario, the Professional Engineers Act stipulates a minimum education level of a three-year diploma in technology from a College of Applied Arts and Technology or a degree in a relevant science area. However, engineering undergraduates and all other applicants are not allowed to use the title of engineer until they complete a minimum of four years of work experience in addition to passing the Professional Practice Examination (PPE). If the applicant does not hold an undergraduate engineering degree, they may have to take the Confirmatory Practice Exam or Specific Examination Program unless the exam requirements are waived by a committee.
A person must be granted the “professional engineer” licence to have the right to practise professional software engineering as a Professional Engineer in Ontario.
To become licensed by Professional Engineers Ontario (PEO), one must:
Be at least 18 years of age.
Be a citizen or permanent resident of Canada.
Be of good character. Applicants will be requested to answer questions and make a written declaration on the form as a test of ethics.
Meet PEO's stipulated academic requirements for licensure.
Pass the Professional Practice Examination.
Fulfill engineering work experience requirements.
Many graduates of software engineering programs are unable to obtain the PEO licence because the entry-level work they qualify for after graduation, such as writing or testing code at a software company, does not fulfill the work experience guidelines the PEO sets. Software engineering programs in Ontario and other provinces also include courses in electrical, electronic, and computer engineering, qualifying graduates to work in those fields as well.
Quebec
A person must be granted the "engineer" licence to have the right to practise professional software engineering in Quebec. To become licensed by the Quebec order of engineers (in French: Ordre des ingénieurs du Québec, OIQ), a candidate must:
Be at least 18 years of age.
Be of good character. The candidate will be requested to answer questions and make a written declaration on the application form to test their ethics.
Meet OIQ's stipulated academic requirements for licensure. In this case, the academic program should be accredited by the Canadian Engineering Accreditation Board (CEAB).
Pass the Professional Practice Examination.
Fulfill engineering work experience requirements.
Pass the working knowledge of French exam
Software engineering (SEng) guidelines by Canadian provinces
The term "engineer" in Canada is restricted to those who have graduated from a qualifying engineering programme. Some universities’ "software engineering" programmes are under the engineering faculty and therefore qualify, for example the University of Waterloo. Others, such as the University of Toronto have "software engineering" in the computer science faculty which does not qualify. This distinction has to do with the way the profession is regulated. Degrees in "Engineering" must be accredited by a national panel and have certain specific requirements to allow the graduate to pursue a career as a professional engineer. "Computer Science" degrees, even those with specialties in software engineering, do not have to meet these requirements so the computer science departments can generally teach a wider variety of topics and students can graduate without specific courses required to pursue a career as a professional engineer.
Europe
Throughout the whole of Europe, suitably qualified engineers may obtain the professional European Engineer qualification.
France
In France, the term ingénieur (engineer) is not a protected title and can be used by anyone, even by those who do not possess an academic degree.
However, the title Ingénieur Diplômé (Graduate Engineer) is an official academic title that is protected by the government and is associated with the Diplôme d'Ingénieur, which is one of the most prestigious academic degrees in France.
Iceland
The use of the title tölvunarfræðingur (computer scientist) is protected by law in Iceland. Software engineering is taught in computer science departments in Icelandic universities. Icelandic law states that, when the degree was awarded abroad, permission must be obtained from the Minister of Industry before the title may be used. The title is awarded to those who have obtained a BSc degree in computer science from a recognized higher educational institution.
New Zealand
In New Zealand, the Institution of Professional Engineers New Zealand (IPENZ), which licenses and regulates the country's chartered engineers (CPEng), recognizes software engineering as a legitimate branch of professional engineering and accepts applications from software engineers to obtain chartered status, provided they have a tertiary degree in approved subjects. Software engineering is included, whereas computer science normally is not.
See also
Bachelor of Science in Information Technology
Bachelor of Software Engineering
List of software engineering topics
Software engineering demographics
Software engineering economics
References
External links
Professional licensing in Texas
SE As Code of Ethics
"A Review of the Professionalization of the Software Industry: Has it Made Software Engineering a Real Profession?", An academic article documenting the progress of SE professionalization
Software engineering
16621067 | https://en.wikipedia.org/wiki/Adobe%20Photoshop%20Express | Adobe Photoshop Express | Adobe Photoshop Express is a free image editing and collage making mobile application from Adobe Inc. The app is available on iOS, Android and Windows phones and tablets. It can also be installed on Windows desktop with Windows 8 and above, via the Microsoft Store. Photoshop Express Editor has various features that can be used to enhance photos.
In November 2016, collage creation was introduced to Adobe Photoshop Express on iOS. The app allows editing pictures on a smartphone or tablet rather than online, and can be used to showcase art, ideas, or products.
Features
Mobile users can perform a range of basic image editing functions, adjust aspects such as contrast and exposure, correct perspective, remove blemishes and add text. Users can also choose from a range of dynamic effects such as Nature, Black and White, Colored and Portrait.
Photoshop Express is a freemium product, and more advanced features require a paid subscription.
Terms of use
In the early days of the beta test, the product's terms of use raised controversy, in that it claimed to retain an irrevocable license to use certain works submitted by end users in perpetuity. At that time, Adobe held users of Photoshop Express to Adobe.com's general Terms of Use, which had not been drafted in contemplation of the sort of user-created content utilized in Photoshop Express.
Following user concerns and negative press, Adobe issued new, more specialized Terms of Use for the Photoshop Express product that superseded sections of the General Terms, and clarified many of these issues. Changes included making the license expressly revocable and indicating that Adobe's rights to use the content are solely for the operation of Photoshop Express itself.
See also
Adobe Photoshop Elements
References
External links
Adobe Photoshop Express
Free Online Photoshop
Photo software
Express
Android (operating system) software
IOS software
Windows Phone software
Universal Windows Platform apps
Windows graphics-related software
Proprietary cross-platform software
57508020 | https://en.wikipedia.org/wiki/GeckoLinux | GeckoLinux | GeckoLinux is a Linux distribution based on openSUSE. It is available in two editions: Static, which is based on openSUSE Leap, and Rolling, which is based on openSUSE Tumbleweed.
Features
GeckoLinux's ISOs are based on openSUSE's images (around 1 GB). Different editions include different desktop environments, installed open-source programs, and ready-to-work proprietary programs and/or drivers. GeckoLinux also comes with font rendering through open-source software and pre-installed TLP for power management. It is open-source, available in public repositories.
Differences from openSUSE
GeckoLinux has an offline live USB/DVD installer for both the Static and Rolling editions. It comes with pre-installed programs and application packages downloaded from the Packman repository. Because it is a pre-configured distribution, desktop programs can be removed along with their data and dependencies, and it does not offer installation of some additional packages after the OS is installed.
Version history
Static
Rolling
Reception
Dedoimedo wrote a non-positive review of GeckoLinux 150.180616, saying that it had "issues with multimedia playback, visual glitches and the graphics driver". A subsequent review of GeckoLinux Static 152 in February 2021 was more positive.
Jack M. Germain explained how and why GeckoLinux is "doing for the OpenSuse/Suse world much of what Linux Mint did for the Ubuntu universe".
See also
openSUSE
Rolling release
References
External links
RPM-based Linux distributions
x86-64 Linux distributions
Linux distributions
5029696 | https://en.wikipedia.org/wiki/Sex%20Packets | Sex Packets | Sex Packets is the debut studio album by American hip hop group Digital Underground, released in 1990.
Album background
The album is a concept album about "G.S.R.A." (Genetic Suppression Relief Antidotes), a pharmaceutical substance produced as a large glowing pill about the size of a quarter and sold in a condom-sized package. It is allegedly developed by the government to provide its intended users, such as astronauts, with a satisfying sexual experience in situations where the normal attainment of such experiences would be counterproductive to the mission at hand.
Release and reception
The album was released in the spring of 1990 following the success of its two lead-off singles: "Doowutchyalike", a moderate club hit, followed by "The Humpty Dance", which reached No. 11 on the pop chart, No. 7 on the R&B chart, and No. 1 on the Billboard Rap Singles chart. Sex Packets was released to positive reviews and eventually achieved platinum sales. The album was re-issued on February 8, 2005 by Rhino Entertainment. The album is broken down track-by-track by Digital Underground in Brian Coleman's book Check the Technique.
Legacy
In 1998, the album was selected as one of The Source's 100 Best Rap Albums Ever. It is included in the book 1001 Albums You Must Hear Before You Die.
Track listing
CD
LP
Cassette
The cassette version of the album has 3 extra tracks, plus an extended version of "Gutfest '89".
Samples
"Underwater Rimes (Remix)"
"Chameleon" by Herbie Hancock
"Aqua Boogie (A Psychoalphadiscobetabioaquadoloop)" by Parliament
"Paul Revere" by Beastie Boys
"Doowutchyalike"
"I Get Lifted" by KC & the Sunshine Band
"Flash Light" by Parliament
"Bounce, Rock, Skate, Roll" by Vaughan Mason & Crew
"Good Times" by Chic
"Agony of Defeet" by Parliament
"Sexuality" by Prince
"Ain't No Half Steppin'" by Big Daddy Kane
"Atomic Dog" by George Clinton
"Keep Risin' to the Top" by Doug E. Fresh
"Packet Man"
"Four Play" by Fred Wesley and The Horny Horns
"The Humpty Dance"
"Sing a Simple Song" by Sly & the Family Stone
"Humpty Dump" by The Vibrettes
"Theme from the Black Hole" by Parliament
"Let's Play House" by Parliament
"Freaks of the Industry"
"Love to Love You Baby" by Donna Summer
"Gutfest '89"
"Theme From the Planets" by Dexter Wansel
"Rhymin' on the Funk"
"Flash Light" by Parliament
"Sex Packets"
"Dr. Funkenstein (Live)" by Parliament
"The Motor-Booty Affair" by Parliament
"She's Always in My Hair" by Prince
"The Danger Zone"
"Flash Light" by Parliament
"Bootzilla" by Bootsy's Rubber Band
"You're a Customer" by EPMD
"The Way We Swing"
"Who Knows" by Jimi Hendrix
Charts
Weekly charts
Year-end charts
Certifications
References
External links
Sex Packets (Adobe Flash) at Radio3Net (streamed copy where licensed)
Sex Packets (US Release) (Adobe Flash) at Myspace (streamed copy where licensed)
1990 debut albums
Digital Underground albums
Tommy Boy Records albums
Concept albums
22521251 | https://en.wikipedia.org/wiki/List%20of%203D%20computer%20graphics%20software | List of 3D computer graphics software | This list of 3D graphics software contains software packages related to the development and exploitation of 3D computer graphics. For a comparison see Comparison of 3D computer graphics software.
0-9
123D is Autodesk's entry into the hobbyist 3D modelling market.
3D-Coat is a digital sculpting program specializing in voxel and polygonal sculpting.
3Delight is a RenderMan-compliant renderer.
3DCrafter (previously known as 3D Canvas) is a 3D modelling and animation tool available in a freeware version, as well as paid versions (3D Canvas Plus and 3D Canvas Pro).
3ds Max (Autodesk), originally called 3D Studio MAX, is a comprehensive and versatile 3D application used in film, television, video games and architecture for Windows and Apple Macintosh (but only running via Parallels or other VM software). It can be extended and customized through its SDK or through scripting with MaxScript. It can use third-party rendering options such as Brazil R/S, finalRender and V-Ray.
A
AC3D (Inivis) is a 3D modeling application that began in the 1990s on the Amiga platform. Used in a number of industries, MathWorks actively recommends it in many of their aerospace-related articles due to price and compatibility. AC3D does not feature its own renderer, but can generate output files for both RenderMan and POV-Ray among others.
Adobe After Effects (Adobe Systems) is a digital visual effects, motion graphics, and compositing application used in the post-production process of film making and television production.
Aladdin4D (DiscreetFX), first created for the Amiga, was originally developed by Adspec Programming. After acquisition by DiscreetFX, it is multi-platform for OS X, Amiga OS 4.1, MorphOS, Linux, AROS and Windows.
Amiga Reflections is a 3D modeling and rendering software developed by Carsten Fuchs for the Commodore Amiga.
Anim8or is a proprietary freeware 3D rendering and animation package.
Animation:Master from HASH, Inc is a modeling and animation package that focuses on ease of use. It is a spline-based modeler. Its strength lies in character animation.
Aqsis is a free and open source rendering suite compliant with the RenderMan standard.
AutoQ3D Community is not a professional CAD program and it is focused to beginners who want to make rapid 3D designs. It is a free software package developed under the GPL. It has a commercial sibling called AutoQ3D CAD.
B
Blender (Blender Foundation) is a free, open source, 3D studio for animation, modeling, rendering, and texturing offering a feature set comparable to commercial 3D animation suites. It is developed under the GPL and is available on all major platforms including Windows, OS X, Linux, BSD, and Solaris.
BRL-CAD is an open source solid modeling computer-aided design system with interactive geometry editing, ray-tracing for rendering and geometric analysis, network-distributed framebuffer display support, and image and signal processing tools. It supports a myriad of geometry representation types including boundary representation (BREP) NURBS, triangle and polygonal models, volumetric data visualization, and efficient implicit constructive solid geometry (CSG). It has been under continuous active development since 1984.
Brazil is a rendering engine for 3ds Max, Rhino and VIZ
Bryce (DAZ Productions) is most famous for landscapes and creating 'painterly' renderings, as well as its unique user interface. DAZ has stopped its development, and it is not compatible with Mac OS X 10.7 or higher. It is currently being given away for free via the DAZ 3D website.
C
Carrara (Daz Productions) is a 3D complete tool set package for 3D modeling, texturing animation and rendering.
CATIA is CAD/CAE/CAM software from Dassault Systèmes for 3D modeling, texturing, animation and rendering.
Cheetah3D is a proprietary program for Apple Macintosh computers primarily aimed at amateur 3D artists, with some medium- and high-end features.
Cinema 4D (MAXON) is a light (Prime) to full featured (Studio) 3d package dependent on version used. Although used in film usually for 2.5d work, Cinema's largest user base is in the television motion graphics and design/visualisation arenas. Originally developed for the Amiga, it is also available for OS X and Windows.
CityEngine (Procedural Inc) is a 3D modeling application specialized in the generation of three-dimensional urban environments. With the procedural modeling approach, CityEngine enables the efficient creation of detailed large-scale 3D city models, it is available for OS X, Windows and Linux.
Clara.io (Exocortex) is a 3D polygon modeling, solid constructive geometry, animation, and rendering package that runs in any web browser that supports WebGL. It supports collaborative real-time editing and revision control. It supports format conversions as well to popular web-formats including Three.js, and Babylon.js.
CloudCompare is an open source 3D point cloud editing and processing software.
Cobalt is a parametric-based Computer-aided design (CAD) and 3D modeling software for both the Macintosh and Microsoft Windows. It integrates wireframe, freeform surfacing, feature-based solid modeling and photo-realistic rendering (see Ray tracing), and animation.
D
Daz Studio is a free 3D rendering tool set for adjusting parameters of pre-existing models, posing and rendering them in full 3D scene environments. It imports objects created in Poser and is similar to that program, but with fewer features.
DX Studio is a complete integrated development environment for creating interactive 3D graphics. The system comprises both a real-time 3D engine and a suite of editing tools, and is the first product to offer a complete range of tools in a single IDE.
E
Electric Image Animation System (EIAS3D) is a 3D animation and rendering package available on both OS X and Windows. Mostly known for its rendering quality and rendering speed it does not include a built-in modeler. The popular film Pirates of the Caribbean and the television series Lost used the software.
Element 3D is a third-party After Effects plugin developed by Video Copilot. It is a 64-bit plugin that runs on both Windows and Mac OS. It is used for importing and creating 3D objects and for particle rendering, mostly in motion design and visual effects. Its simple interface and presets are designed for an easier, user-friendly workflow, allowing professional results in much less time.
F
FaceGen is a source of human face models for other programs. Users are able to generate face models either randomly or from input photographs.
FreeCAD is a full-featured CAD/CAE open source software. Python scripting and various plugin modules are supported, e.g. CAM, Robotics, Meshing and FEM.
form•Z (AutoDesSys, Inc.) is a general purpose solid/surface 3D modeler. Its primary use is for modeling, but it also features photo realistic rendering and object-centric animation support. form•Z is used in architecture, interior design, illustration, product design, and set design. It supports plug-ins and scripts, has import/export capabilities and was first released in 1991. It is currently available for both OS X and Windows.
G
Gelato is a hardware-accelerated, non-real-time renderer created by graphics card manufacturer NVIDIA.
GigaMesh Software Framework is a tool for analyzing high-resolution 3D-measurement data focused on applications in forensic sciences like archaeology. It is known for robust visualization and extraction of features like weathered inscriptions or faint fingerprints.
Gmax is a deprecated lightweight freeware edition of Autodesk 3ds Max originally developed for hobbyists and game content creation. It was discontinued in 2005 but is still available for download.
Grome is a 3D environment editor for terrain, water and vegetation suitable for games and other 3D real-time applications.
H
Hexagon (Daz Productions) is a 3D subdivision modeler with an emphasis on organic modeling.
Houdini (Side Effects Software) is used for visual effects and character animation. It was used in Disney's feature film The Wild. Houdini uses a non-standard interface that it refers to as a "NODE system". It has a hybrid micropolygon-raytracer renderer, Mantra, but it also has built-in support for commercial renderers like Pixar's RenderMan and Mental Ray.
I
iClone (Reallusion Inc.) is a stand-alone, real-time 3D animation software used in previsualization, video games, Machinima, and story-telling, that specializes in quick facial and body animation. iClone provides a global library of template-based content and a collection of animation tools including a motion capturing plug-in, and content exporting capabilities to other 3D applications and game engines.
Imagine (3D modeling software)
Indigo Renderer is an unbiased photorealistic renderer that uses XML for scene description. Exporters available for Blender, Maya (Mti), form•Z, Cinema4D, Rhino, 3ds Max.
Infini-D, from Specular International, was Macworld's 1996 "Best 3D Modeler/Renderer".
Inventor (Autodesk) The Autodesk Inventor is for 3D mechanical design, product simulation, tooling creation, and design communication.
K
Kerkythea is a freeware rendering system that supports raytracing. Currently, it can be integrated with 3ds Max, Blender, SketchUp, and Silo (generally any software that can export files in obj and 3ds formats). Kerkythea is a standalone renderer, using physically accurate materials and lighting.
KernelCAD is a software component aimed at presenting CAD as a GUI element for programming engineers. It includes an interface to Open CASCADE.
L
LightWave 3D (NewTek), first developed for the Amiga, was originally bundled as part of the Video Toaster package and entered the market as a low cost way for TV production companies to create quality CGI for their programming. It first gained public attention with its use in the TV series Babylon 5 and is used in several contemporary TV series. Lightwave is also used in a variety of modern film productions. It is available for both Windows and OS X.
LuxRender is an unbiased open source rendering engine featuring Metropolis light transport
M
MacPerspective was a 3D perspective drawing program developed for the Apple Macintosh computer in 1985.
MakeHuman is a GPL program that generates 3D parametric humanoids.
MASSIVE is a 3D animation system for generating crowd-related visual effects, targeted for use in film and television. Originally developed for controlling the large-scale CGI battles in The Lord of the Rings, Massive has become an industry standard for digital crowd control in high-end animation and has been used on several other big-budget films. It is available for various Unix and Linux platforms as well as Windows.
Maxwell Render is a multi-platform renderer which forgoes raytracing, global illumination and radiosity in favor of photon rendering with a virtual electromagnetic spectrum, resulting in very authentic-looking renders. It was the first unbiased renderer to market.
Maya (Autodesk) is currently used in the film, television, and gaming industry. Maya has developed over the years into an application platform in and of itself through extendability via its MEL programming language. It is available for Windows, Linux and OS X.
mental ray is another popular renderer, and comes default with most of the high-end packages. (Now owned by NVIDIA)
MeshLab is a free Windows, Linux and OS X application for visualizing, simplifying, processing and converting large three-dimensional meshes to or from a variety of 3D file formats.
MetaCreations Detailer and Painter 3D are discontinued software applications specifically for painting texture maps on 3-D Models.
MicroStation is proprietary 2D / 3D modelling program with extensive import/export capabilities by Bentley Systems.
Milkshape 3D is a shareware/trialware polygon 3D modelling program with extensive import/export capabilities.
Modo (The Foundry) is a subdivision modeling, texturing and rendering tool with support for camera motion and morphs/blendshapes, and is now used in the television industry. It is available for Windows, OS X and Linux (x86_64).
Mudbox is a high resolution brush-based 3D sculpting program, that claims to be the first of its type. The software was acquired by Autodesk in 2007, and has a current rival in its field known as ZBrush (see below).
N
NX (Siemens PLM Software) is an integrated suite of software for computer-aided mechanical design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) formed by combining the former Unigraphics and SDRC I-deas software product lines. NX is currently available for the following operating systems: Windows XP and Vista, Apple Mac OS X, and Novell SUSE Linux.
O
Octane Render is an unbiased GPU-accelerated renderer based on Nvidia CUDA.
OpenFX is a modelling and animation studio, distributed under the GPL.
P
ParaView is an open source 3D scientific visualization platform.
Photoshop (Adobe Systems) can import 3D models from programs such as ZBrush and 3ds Max, and allows complex textures to be added to them.
Pixie is an open source photorealistic renderer.
Poser (Smith Micro) is a 3D rendering and animation program optimized for models that depict the human figure in three-dimensional form, specialized for adjusting features of preexisting character models via varying parameters, as well as for posing and rendering of models and characters. It includes some specialized tools for walk cycle creation, cloth and hair.
POV-Ray (or The Persistence of Vision Raytracer) is a freeware (with source) ray tracer written for multiple platforms.
PowerAnimator (Alias Systems Corporation) was a high-end 3D package in the 1990s, running on Silicon Graphics (SGI) workstations. Alias took code from PowerAnimator, TDI Explore and Wavefront to build Maya.
Presto is a proprietary software developed and used in-house by Pixar Animation Studios in the animation of their feature and short films. Presto is not available for sale and therefore is only used by Pixar.
R
RenderMan (Pixar) is a renderer, used in many studios. Animation packages such as 3DS Max and Maya can pipeline to RenderMan to do all the rendering.
RealFlow simulates and renders particle systems of rigid bodies and fluids.
Realsoft 3D is a full-featured 3D modeling, animation, simulation and rendering suite available for Windows, Linux, Mac OS X and Irix.
Remo 3D is a commercial 3D modeling tool specialized in creating 3D models for realtime visualization, available for Windows and Linux.
Rhinoceros 3D is a commercial modeling tool which has excellent support for freeform NURBS editing.
S
Sculptris is a free, simple-to-use digital sculpting program made by Pixologic; it is essentially a beginner's version of ZBrush.
Sculpt 3D is a raytracing application released in 1987 for Amiga computers, programmed by Eric Graham.
Shade 3D is a commercial modeling/rendering/animation tool from Japan with import/export format support for Adobe, Social Worlds, and QuickTime among others.
Seamless3d is a NURBS based modelling and animation software with much of the focus on creating avatars optimized for real time animation. It is free, open source under the MIT license.
Showplace is a 3D computer graphics program which was released in the 1990s by Pixar with versions for Apple Macintosh and Microsoft Windows.
Silo (Nevercenter) is a subdivision-surface modeler available for OS X and Windows. Silo does not include a renderer. Silo is the bundled modeler for the Electric Image Animation System suite.
Simplygon is a commercial mesh processing package for remeshing general input meshes into real-time renderable meshes.
SketchUp Free (Trimble) is a 3D modeling package that features a sketch-based modelling approach integrated with Google Earth and limits export to Google's "3D Warehouse", where users can share their content. It has a pro version which supports 2D and 3D model export functions among other features.
SquidNet-NDP is a commercially available product for rendering 3D animations within a distributed network environment.
Softimage (Autodesk) (formerly Softimage|XSI) is a 3D modeling and animation package that integrates with mental ray rendering. It is feature-similar to Maya and 3ds Max and is used in the production of professional films, commercials, video games, and other media.
Solid Edge (Siemens PLM Software) is a commercial application for design, drafting, analysis, and simulation of products, systems, machines and tools. All versions include feature-based parametric modeling, assembly modeling, drafting, sheetmetal, weldment, freeform surface design, and data management. Application-programming interfaces enable scripting in Visual Basic and C programming.
solidThinking (solidThinking) is a 3D solid/surface modeling and rendering suite which features a construction tree method of development. The tree is the "history" of the model construction process and allows real-time updates when modifications are made to points, curves, parameters or entire objects.
SolidWorks (SolidWorks Corporation) is an application used for the design, detailing and validation of products, systems, machines and tooling. All versions include modeling, assemblies, drawing, sheetmetal, weldment, and freeform surfacing functionality. It also has support for scripting in Visual Basic and C.
SolveSpace is a free libre and open source 2D and 3D CAD (computer-aided design) program. It is a constraint-based parametric modeler with simple mechanical simulation capabilities. Version 2.1 onward runs on Windows, Linux and macOS. It is developed by Jonathan Westhues and a community of volunteers.
Spore (Maxis) is a game allowing users to design their own fully functioning creatures with a very rudimentary, easy-to-use interface. The game includes a COLLADA exporter, so models can be downloaded and imported into any other 3D software listed here that supports the COLLADA format. Models can also be directly imported into game development software such as Unity (game engine).
Sunflow is an open source, photo-realistic renderer written in Java.
Swift 3D (Electric Rain) is a relatively inexpensive 3D design, modeling, and animation application targeted to entry-level 3D users and Adobe Flash designers. Swift 3D supports vector and raster-based 3D animations for Adobe Flash and Microsoft Silverlight XAML.
T
Typestry (Pixar) is an abandonware 3D computer program released in the 1990s by Pixar for Apple Macintosh and DOS-based PC computer systems. It rendered and animated text in 3D in various fonts based on the user's input.
Terragen and Terragen 2 are scenery generators.
The Advanced Visualizer, a.k.a. TAV (Wavefront Technologies) was a high-end 3D package between the late 1980s and mid-1990s, running on Silicon Graphics (SGI) workstations. Wavefront first acquired TDI in 1993, before Wavefront itself was acquired in 1995 along with Alias by SGI to form Alias|Wavefront.
trueSpace (Caligari Corporation) is a discontinued 3D program for Windows, although the company Caligari first found its start on the Amiga platform. trueSpace features modelling, animation, 3D-painting, and rendering capabilities. In 2009, Microsoft purchased trueSpace and it is now available completely free of charge.
V
VRay is promoted for use in the architectural visualization field in conjunction with 3ds Max and 3ds VIZ. It is also commonly used with Maya and Rhino.
Vue is a tool for creating, animating and rendering natural 3D environments. It was most recently used to create the background jungle environments in the second and third Pirates of the Caribbean films.
W
Wings 3D is a BSD-licensed, subdivision modeller.
Y
YafRay is a raytracer/renderer distributed under the LGPL. This project is no longer being actively developed.
YafaRay is YafRay's successor, a raytracer/renderer distributed under the LGPL.
Z
ZBrush (Pixologic) is a digital sculpting and animation tool that combines 3D/2.5D modeling, texturing and painting. It is available for OS X and Windows. It is used to create normal maps for low resolution models to make them look more detailed.
See also
Comparison of 3D computer graphics software
List of 3D animation software
List of 3D modeling software
References
3D graphics software
3D imaging
Visual effects software
Windows Subsystem for Linux
Windows Subsystem for Linux (WSL) is a compatibility layer for running Linux binary executables (in ELF format) natively on Windows 10, Windows 11, and Windows Server 2019.
In May 2019, WSL 2 was announced, introducing important changes such as a real Linux kernel, through a subset of Hyper-V features. Since June 2019, WSL 2 has been available to Windows 10 customers through the Windows Insider program, including the Home edition. WSL is not available to all Windows 10 users by default; it can be installed either by joining the Windows Insider program or by manual installation.
History
Microsoft's first foray into achieving Unix-like compatibility on Windows began with the Microsoft POSIX Subsystem, superseded by Windows Services for UNIX via MKS/Interix, which was eventually deprecated with the release of Windows 8.1. The technology behind Windows Subsystem for Linux originated in the unreleased Project Astoria, which enabled some Android applications to run on Windows 10 Mobile. It was first made available in Windows 10 Insider Preview build 14316.
Whereas Microsoft's previous projects and the third-party Cygwin had focused on creating their own unique Unix-like environments based on the POSIX standard, WSL aims for native Linux compatibility. Instead of wrapping non-native functionality into Win32 system calls as Cygwin did, WSL's initial design (WSL 1) leveraged the NT kernel executive to serve Linux programs as special, isolated minimal processes (known as "pico processes") attached to kernel mode "pico providers" as dedicated system call and exception handlers distinct from that of a vanilla NT process, opting to reutilize existing NT implementations wherever possible.
WSL beta was introduced in Windows 10 version 1607 (Anniversary Update) on August 2, 2016. Only Ubuntu (with Bash as the default shell) was supported. WSL beta was also called "Bash on Ubuntu on Windows" or "Bash on Windows". WSL left beta in Windows 10 version 1709 (Fall Creators Update), released on October 17, 2017. Multiple Linux distributions could be installed and were available to install from the Windows Store.
In 2017, Richard Stallman expressed fears that integrating Linux functionality into Windows would only hinder the development of free software, calling efforts like WSL "a step backward in the campaign for freedom."
Though WSL (via this initial design) was much faster and arguably much more popular than its brethren UNIX-on-Windows projects, Windows kernel engineers found it difficult to increase WSL's performance and syscall compatibility by reshaping the existing NT kernel to recognize and operate correctly on Linux's API. At a Microsoft Ignite conference in 2018, Microsoft engineers gave a high-level overview of a new "lightweight" Hyper-V VM technology for containerization where a virtualized kernel could make direct use of NT primitives on the host. In 2019, Microsoft announced a completely redesigned WSL architecture (WSL 2) using this lightweight VM technology hosting actual (customized) Linux kernel images, claiming full syscall compatibility. Microsoft announced WSL 2 on May 6, 2019, and it was shipped with Windows 10 version 2004. It was also backported to Windows 10 versions 1903 and 1909.
GPU support for WSL 2 was introduced in Windows build 20150. GUI support for WSL 2 was introduced in Windows build 21364. Both are shipped in Windows 11.
In April 2021, Microsoft released a Windows 10 test build that also includes the ability to run Linux graphical user interface (GUI) apps using WSL 2 and CBL-Mariner. The Windows Subsystem for Linux GUI (WSLg) was officially released at the Microsoft Build 2021 conference. It is included in Windows 10 Insider build 21364 or later.
Microsoft introduced a Windows Store version of WSL on October 11, 2021, for Windows 11.
Features
WSL is available in Windows Server 2019 and in versions of Windows 10 from version 1607, though only in 64-bit versions.
Microsoft envisages WSL as "primarily a tool for developers – especially web developers and those who work on or with open source projects". In September 2018, Microsoft said that "WSL requires fewer resources (CPU, memory, and storage) than a full virtual machine" (which prior to WSL was the most direct way to run Linux software in a Windows environment), while also allowing users to use Windows apps and Linux tools on the same set of files.
The first release of WSL provides a Linux-compatible kernel interface developed by Microsoft, containing no Linux kernel code, which can then run the user space of a Linux distribution on top of it, such as Ubuntu, openSUSE, SUSE Linux Enterprise Server, Debian and Kali Linux. Such a user space might contain a GNU Bash shell and command language, with native GNU command-line tools (sed, awk, etc.), programming-language interpreters (Ruby, Python, etc.), and even graphical applications (using an X11 server at the host side).
The architecture was redesigned in WSL 2, with a Linux kernel running in a lightweight virtual machine environment.
wsl.exe
The wsl.exe command is used to manage distributions in the Windows Subsystem for Linux from the command line. It can list available distributions, set a default distribution, and uninstall distributions. The command can also be used to run Linux binaries from the Windows Command Prompt or Windows PowerShell. wsl.exe replaces lxrun.exe, which is deprecated as of Windows 10 1803 and later.
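For illustration, a few common wsl.exe invocations covering the operations described above; the distribution name Ubuntu-20.04 is an example and depends on what is installed:

```shell
# List installed distributions with their state and WSL version
wsl --list --verbose

# Set the default distribution
wsl --set-default Ubuntu-20.04

# Convert an installed distribution to WSL 2
wsl --set-version Ubuntu-20.04 2

# Run a Linux binary directly from the Windows command line
wsl uname -r

# Remove a distribution and delete its filesystem
wsl --unregister Ubuntu-20.04
```

Running wsl with no subcommand simply launches the default distribution's shell.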
WSLg
WSLg is short for Windows Subsystem for Linux GUI, built to enable support for running Linux GUI applications (X11 and Wayland) on Windows in a fully integrated desktop experience. WSLg was officially released at the Microsoft Build 2021 conference and is included in Windows 10 Insider build 21364 or later. With the introduction of Windows 11, WSLg finally ships with a production build of Windows, bringing support for both graphics and audio in WSL apps.
Prerequisites for running WSLg include:
Windows 11 (build 22000.*) or Windows 11 Insider Preview (builds 21362+)
A system with virtual GPU (vGPU) enabled for WSL is recommended, as it allows you to benefit from hardware-accelerated OpenGL rendering
Design
WSL 1
LXSS Manager Service is the service in charge of interacting with the subsystem (through the drivers lxss.sys and lxcore.sys), and the way that Bash.exe (not to be confused with the shells provided by the Linux distributions) launches the Linux processes, as well as handling the Linux system calls and the binary locks during their execution. All Linux processes invoked by a particular user go into a "Linux Instance" (usually, the first invoked process is init). Once all the applications are closed, the instance is closed.
WSL 1's design featured no hardware emulation or virtualization (unlike other projects such as coLinux) and makes direct use of the host file system (through VolFs and DrvFs) and some parts of the hardware, such as the network, which guarantees interoperability. Web servers, for example, can be accessed through the same interfaces and IP addresses configured on the host, and share the same restrictions on the use of ports that require administrative permissions, or ports already occupied by other applications. There are certain locations (such as system folders) and configurations whose access/modification is restricted, even when running as root with sudo from the shell. An instance with elevated privileges must be launched in order to get "sudo" to give real root privileges, and allow such access.
WSL 1 is not capable of running all Linux software, such as 32-bit binaries, or those that require specific Linux kernel services not implemented in WSL. Due to the lack of any "real" Linux kernel in WSL 1, kernel modules, such as device drivers, cannot be run. WSL 2, however, makes use of live virtualized Linux kernel instances. It is possible to run some graphical (GUI) applications (such as Mozilla Firefox) by installing an X11 server within the Windows (host) environment (such as VcXsrv or Xming), although not without caveats, such as the lack of audio support (though this can be remedied by installing PulseAudio in Windows in a similar manner to X11) or hardware acceleration (resulting in poor graphics performance). Support for OpenCL and CUDA is not currently implemented, although planned for future releases. Microsoft stated WSL was designed for the development of applications, and not for desktop computers or production servers, recommending the use of virtual machines (Hyper-V), Kubernetes, and Azure for those purposes.
In benchmarks, WSL 1's performance is often near that of native Linux distributions such as Ubuntu, Debian, or Intel Clear Linux. In some tests, I/O is a bottleneck for WSL. The redesigned WSL 2 backend is claimed by Microsoft to offer twenty-fold increases in speed on certain operations compared to WSL 1. In June 2020, a benchmark of 173 tests with an AMD Threadripper 3970X showed good performance with WSL 2 (20H2), reaching 87% of the performance of native Ubuntu 20.04.0 LTS. This is an improvement over WSL 1, which reaches only 70% of the performance of native Ubuntu in this comparison. WSL 2 improves I/O performance, providing a near-native level.
A comparison of 69 tests with an Intel i9 10900K in May 2020 showed nearly the same relative performance. In December 2020, a benchmark of 43 tests with an AMD Ryzen 5900X showed good performance with WSL 2 (20H2), reaching 93% of the performance of native Ubuntu 20.04.1 LTS. This is an improvement over WSL 1, which reaches only 73% in this comparison.
WSL 2
Version 2 introduces changes in the architecture. Microsoft has opted for virtualization through a highly optimized subset of Hyper-V features, in order to run the kernel and distributions (based upon the kernel), promising performance equivalent to WSL 1. For backward compatibility, developers don't need to change anything in their published distributions. WSL 2 settings can be tweaked through the WSL global configuration, contained in an INI file named .wslconfig in the user profile folder.
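As an illustration, a minimal .wslconfig using documented WSL 2 settings; the specific values here are arbitrary examples, not defaults:

```ini
# %UserProfile%\.wslconfig — global settings applied to all WSL 2 distributions
[wsl2]
memory=4GB                  # limit the lightweight VM to 4 GB of RAM
processors=2                # expose two virtual CPUs to the VM
swap=0                      # disable the swap file
localhostForwarding=true    # forward ports bound in Linux to localhost on Windows
```

Changes take effect the next time the WSL 2 VM is restarted (for example after `wsl --shutdown`).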
The distribution installation resides inside an ext4-formatted filesystem inside a virtual disk, and the host file system is transparently accessible through the 9P protocol, similarly to other virtual machine technologies like QEMU. For the users, Microsoft promised up to 20 times the read/write performance of WSL 1. From Windows an IFS network redirector is provided for Linux guest file access using the UNC path prefix of \\wsl$.
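The cross-filesystem access described above can be sketched as follows; the distribution name Ubuntu and the paths are illustrative:

```shell
# From Windows (PowerShell or Explorer): browse the Linux filesystem
# through the \\wsl$ network redirector
dir \\wsl$\Ubuntu\home

# From inside the WSL distribution: Windows drives are mounted under /mnt
ls /mnt/c/Users
```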
WSL 2 requires Windows 10 version 1903 or higher, with Build 18362 or higher, for x64 systems, and Version 2004 or higher, with Build 19041 or higher, for ARM64 systems.
See also
Azure Sphere
FreeBSD#OS compatibility layers
SmartOS#SmartOS types of zones
Terminal (Windows)
Xenix
References
External links
WSL on Microsoft Docs
Compatibility layers
Windows 10
Windows components
Virtualization software |
List of Person of Interest characters
This is a list of characters in the American science fiction crime drama television series Person of Interest.
Main characters
John Reese
John Reese (played by Jim Caviezel) is the name adopted by the former U.S. Army Special Forces soldier and Central Intelligence Agency (CIA) officer who serves as Finch's armed enforcer used to stop future crimes. Presumed dead following a failed CIA operation in China, Reese's real name is unknown. He was nicknamed "The Man in the Suit" by law enforcement, who know him only in terms of the description given by witnesses.
Harold Finch
Harold Finch (played by Michael Emerson) is a reclusive billionaire software engineer who built a machine that predicts future crimes and outputs either the victim's or perpetrator's Social Security number. He is known by a series of bird-themed aliases such as Harold Wren, Harold Crow, and Harold Swift. Finch is very secretive and highly conscious of digital security, and has successfully erased his own digital footprint. A frequent companion of Finch is a dog named Bear (played by Graubaer's Boker). Bear is a Belgian Malinois with military training who Reese rescues from Aryan Nationalists, who were using him as an attack dog.
Sameen Shaw
Sameen Shaw (played by Sarah Shahi) is a former U.S. Marine and U.S. Army ISA operative, first seen as an assassin working for "The Program", the section of the government dealing with the "relevant" numbers found by the Machine (S2 Ep16, "Relevance"). Her ISA partner Cole begins to have doubts after one of their targets proves to be an engineer working for the government; the assassination is part of the cover-up that conceals the existence of the Machine. As a result, her employers target her and her partner and plan a trap to assassinate them. Shaw's and Cole's numbers are given to Reese and Finch. That night, Cole is killed. Reese and Finch continue to try to help her and, although she initially declines a couple of times, she subsequently works with them on a regular basis. She likes dogs and buys Bear an expensive collar and other presents.
Shaw witnessed her father's death in an automobile accident at a young age but did not exhibit typical emotional reactions to it. In her first appearance, she claims that she has an Axis II personality disorder and alexithymia, making her unable to feel and/or express common human emotions like fear or sadness. Shaw attended medical school and once trained as a surgeon; although very technically capable, she was criticized by her superiors for her indifference and lack of sensitivity to her patients, and it is implied that she was removed from the program before she could complete her surgical training because she lacked these emotional characteristics. She is capable of deducing emotionally correct actions, such as rescuing Fusco's son rather than Fusco (S3 Ep9, "The Crossing") and taking on Genrika as her unofficial ward (S3 Ep5, "Razgovor"). She unexpectedly reveals a hint of emotion by passionately kissing Root before sacrificing herself to save the team (S4 Ep11, "If-Then-Else").
She is later revealed to be alive and is used by Samaritan to bait a trap for the team. ("Asylum") Greer then has Shaw subjected to over 7,000 simulations in an attempt to turn her against the team, but Shaw is able to escape, killing Jeremy Lambert in the process. ("Reassortment") Shaw returns to New York a week later but, because of the number of simulations she went through, has a hard time distinguishing reality from simulation. Root is able to get her to rejoin the team, however, which now includes a fully knowledgeable Fusco. ("Sotto Voce") Following Finch's cover being blown, Shaw works with Root to protect him but is separated from her by a gunfight. She is later devastated in her own way to learn of Root's death. ("The Day the World Went Away") Following Root's death, the Machine reassigns Root's rotating identities to Shaw, and she works with Reese and Fusco to stop a presidential assassination by domestic terrorists. ("Synecdoche") She then infiltrates Fort Meade with Reese to back up Finch in deploying the ICE-9 virus into Samaritan. (".exe")
As the world falls apart from the ICE-9 virus, Shaw visits Root's grave in an attempt to say goodbye, but is unable to. She is then contacted by the Machine, who uses Root's voice and aids her in escaping a Samaritan ambush. During the final battle with Samaritan, Shaw works with Fusco to defend the Machine, capturing Samaritan agent Jeffrey Blackwell in the process. Shaw realizes Blackwell is Root's killer and shows her cold rage, but first decides to listen to the Machine. Before Shaw and Fusco leave the Machine per Her request, the Machine passes on a final message from Root to Shaw, who is finally able to say goodbye to Root. A few moments later, Blackwell escapes after seriously wounding Fusco.
A week later, Shaw hunts down and kills Blackwell in revenge for Root. She then meets a recovering Fusco in a café, neither of them knowing if Reese or Finch survived. Shaw then takes Bear and walks down the street before being contacted by the Machine's copy, still using Root's voice, and smiles at a nearby camera. ("return 0") It appears that Shaw, chosen again as the Machine's primary asset and now also something of an analog interface, will continue working numbers with the Machine.
Root
Root (played by Amy Acker), born Samantha "Sam" Groves, is a highly intelligent computer hacker and contract killer, obsessed with Finch and the Machine. Aside from her extraordinary intelligence, she has also displayed an uncanny ability in the use of firearms, a very high threshold for pain (S2 Ep2, "Bad Code", where she gets physically assaulted by Denton Weeks; S3 Ep12, "Aletheia", where she is severely tortured but not broken by Control) and a proficiency for social engineering. She has successfully posed as a high-end psychiatrist (S1 Ep23, "Firewall"), an assistant to the United States Special Counsel (S2 Ep15, "Booked Solid") and as a legitimate FBI agent (showing up in Lionel Fusco's New York City Police Department 8th Precinct with a legitimate warrant and an FBI badge, S3 Ep17, "Root Path").
She was raised in Bishop, Texas, where she lived until her mother's death. According to Sam, in discussions with Finch, her mother told her to "follow her talents" — and she was good at inflicting harm on others without guilt or remorse. Also, as a child, computers made more sense to Sam than people, whom she sees as "bad code"; she admits she has been waiting her whole life for someone who shared her understanding of technology.
In 1991, Sam and her friend, Hanna Frey, were playing computer games in the local library until it closed for the day. After her friend left, Sam stayed a little longer successfully finishing The Oregon Trail. When she was ready to leave the library, she saw Hanna getting into a car that belonged to Trent Russell, a local man who was a member of the book club. She recognized his car and told the librarian, Barbara, (played by Margo Martindale), who later married Trent Russell, what she had seen. However, as Barbara was in love with Trent, she kept the information to herself to protect him. Two years after Hanna disappeared, Sam hacked into the bank account of a drug lord, and stole $100,000, using the money to frame Trent for the robbery, setting him up to be killed, as revenge for killing Hanna. Sam later moved out of Texas and became a professional assassin and ruthless hacker-for-hire under the alias "Root". Although she doesn't necessarily enjoy it, she has no qualms about torturing or murdering people to get whatever she wants.
She was hired by Pete Matheson to assassinate Congressman Michael Delancey. Through the use of a patsy, Scott Powell, Root was successful. However, her plan to kill Powell afterwards was foiled by John Reese. Ultimately, the operation fell apart as Reese took out one of her men and stole his phone while Finch provided a phone recording to the FBI proving Matheson's involvement. Realizing this, Root killed her client and staged it as a suicide out of guilt for his actions, exonerating Powell. Root cleared out of her location, the dorm room of an uninvolved college student away on winter break, before the FBI, acting on an "anonymous" tip (from Finch) could apprehend her. She had a brief instant message conversation with Finch, acknowledging him as a worthy opponent and said she was looking forward to the next time. She ended the conversation with "Harold", letting him know that she knew who he was.
As part of Root's plan to come face to face with Finch, Root anonymously contacted HR and put a hit on herself, using the pseudonym "Dr. Caroline Turing". Turing's Social Security number appeared through the Machine, making her a person of interest. Reese, discovering Turing was a psychologist, became one of her patients. Later that night, Turing was attacked by four assassins hired by Simmons. Reese managed to get her to safety in a hotel for the night, and the two were unknowingly caught on the FBI's radar. Under Donnelley's supervision, the FBI staged a full-scale assault on the hotel, while HR was trying to get to Turing. With the help of Joss Carter and Finch, Turing was led to a tunnel system under the hotel. Reese instructed her to find Finch at the end of the tunnel, while he held off the HR hit men. As Finch was confronted by Alicia Corwin regarding his involvement with the Machine, Root shot Corwin dead. Introducing herself as Root, she told Harold they had a lot to talk about, then drove away with Finch at gunpoint, leaving Corwin's body behind.
Planning to escape with Finch to Texas, Root stopped in Maryland where she gained access to Denton Weeks' house. She lured him there to overpower and question him about the whereabouts of the Machine. After he told her what she wanted to know, she shot him dead. She did not know that Reese had already tracked her down through her friend Hanna's case. While holding him captive, Root explained to Finch that she does not want to control the Machine but to "set it free" to usher in a posthuman future. At the train station, Reese managed to save Finch and Root escaped. She called him on his mobile phone later that day to thank him for solving Hanna's case and told him that she would be in touch.
Several months later, Root, to gain information, began working as "Miss May", an assistant to the Special Counsel at the Office of Special Counsel. She continues to find information on the Machine's location and learns about the virus and "reset". She beats Decima to it and receives admin access to the Machine for 24 hours. However, by the time Root, Finch, Reese, and Shaw get to the place where the Machine was stored, She had already moved Herself to unknown locations. Root, losing her purpose, is devastated and sent to Stoneridge Hospital by Finch. While being kept there, the Machine contacts her and Root is chosen as the Machine's "Analog Interface". After going through the Machine's reformation project, she starts to work for the Machine and often collaborates with Reese, Shaw, and Finch. She reconsiders and changes her "bad code" ideology throughout season 3, notably in "Root Path" ("/", S3 Ep17). Throughout season 3, she begins to fall in love with Shaw, who unknowingly becomes part of the reason for Root's reformation. By the end of season 3, she is effectively a full member of the team.
In season 4, Root continues to work on small projects with the Machine, preparing for the coming war with Samaritan. She continues her attempts to convince Finch to be a more active player against Samaritan. In "If-Then-Else", she loses Shaw, who sacrifices herself to save her and the team. Root then goes on a warpath in search of Shaw for the rest of the season, partially reverting to the version of herself before her reformation. When the Machine tells Root to "stop" searching for Sameen, she shows disappointment and sadness, leaving the team with a goodbye. She comes back a couple of months later and again helps the Machine prepare for the war. She learns that Shaw is still alive and is being kept somewhere in Samaritan's hands. She infiltrates Samaritan's base knowing it is a trap, and as a result she is captured. The Machine releases her and Finch by making a deal with Samaritan: giving up Her own location. Root then works with Finch and Reese to protect the Machine from being killed by Samaritan and successfully migrates a copy of the Machine onto a laptop and into a custom-made suitcase Root prepared in advance.
Once the Machine is successfully decompressed, she and Finch spend two months debugging the Machine. Once it is up and running as an open system, the first thing she does is search for Shaw, but she is disappointed to find no results. She and Finch get a copy of Samaritan's code and create a mini-clone of Samaritan to run simulations of the war. In "QSO", she is finally able to get the Machine's help to send a message to Shaw via a secret communication channel connected to all Samaritan facilities. Root's message, "4A", stops Shaw from attempting to kill herself and leads her to re-plan an escape. Root finally reunites with Shaw after 11 months of separation in "Sotto Voce". However, just a week after their reunion, Root dies protecting Finch from an assassination attempt. The Machine then chooses Root's voice to talk to Finch and Shaw. In "return 0", the Machine shows that She chose Root not only for her voice but also as Her avatar/proxy. Root, chosen again by the Machine, continues to "live as long as the Machine lives."
Lionel Fusco
Lionel P. Fusco (played by Kevin Chapman) is a homicide detective who was previously working in the NYPD 51st Precinct Vice Squad. He has an ex-wife and a young son, Lee Fusco. Fusco was a member of the criminal organization HR, although he did it for personal loyalty rather than greed. John Reese gave him a chance for redemption. At first, John had to blackmail him with the death of a murdered fellow officer, but later he arranged for him to be transferred to the 8th Precinct Homicide Task Force where he worked with Carter. Gradually during the first season, Fusco became a loyal friend. Until the finale of the first season, he and Carter are unaware that they have both been helping Finch and Reese. Fusco remained unaware of the Machine's existence until "Sotto Voce". He calls everyone by nickname: "Wonder Boy", "Glasses", "Looney Tunes", and "Cocoa Puffs" are nicknames he uses for Reese, Finch, Shaw, and Root, respectively.
Joss Carter
Jocelyn "Joss" Carter (played by Taraji P. Henson) is a detective in the New York City Police Department (NYPD) 8th Precinct Homicide Task Force. She is a single mother of one son, Taylor, whom she is close to. Formerly a Warrant Officer and an interrogator in the U.S. Army, she did two tours of duty, one in Iraq and one in Afghanistan. She passed the bar examination in 2004, but gave up practicing law to become a police officer. In 2005, she had graduated from the police academy and started working in the NYPD as a patrol officer. In 2008 she was promoted to a detective and since 2010 she has worked in the Homicide Task Force at the 8th Precinct. Carter crossed paths with homeless John Reese ("Pilot") following his encounter with a group of young men on a New York City subway, but she later reconnects with him as the mysterious "Man in the Suit". Carter is initially determined to apprehend Reese as a person of interest (a person not formally accused of a crime) even after Reese saves her and her son on separate occasions. However, when the CIA tries to assassinate Reese without due process, she gradually revises her views and starts helping Reese and Finch instead.
For nine episodes in season 3, Carter is demoted to a beat officer after the crime organization HR frames her for shooting an unarmed suspect at the end of season 2. This incident, and the death of two of her fellow detectives – Bill Szymanski and Cal Beecher – motivate her to go against HR and arrest its mastermind, Alonzo Quinn, without resorting to the vigilante justice of Reese and Finch. By doing so, she restores her status as a detective. Tragically, soon afterwards, she is killed by Patrick Simmons, the last surviving member of HR, who was trying to ambush Reese. Her death sends Reese and Shaw on a bloody vengeance spree. During most of her screen time, she is unaware of the Machine's existence and does not know where the tips come from. Only after Carter's status as a detective is restored does she conclude that Finch must be receiving his info from a computer.
Recurring characters
Friends & Family of Team Machine
The following is a list of friends and relatives of Team Machine.
Jessica Arndt
Jessica Arndt (played by Susan Misner) was John Reese's ex-girlfriend and Peter Arndt's wife.
Jessica dated John Reese for approximately six months prior to September 2001 ("Pilot"). At this time, she was living in Tacoma, Washington ("Many Happy Returns"). While on a vacation in Mexico on September 11, 2001, Jessica jokingly asked Reese to quit the United States Army, unaware that he already had. Moments later, she witnessed the news coverage about the attacks on the World Trade Center ("Pilot").
Years later, Jessica ran into Reese at an airport, and he told her that he had found a new job. She told him that she was engaged to a man named Peter and was moving back East, though she would wait for Reese if he told her to. Without a response, Reese turned and left ("Mission Creep"). Later, while operating illegally in New York, Reese met with Peter, now married to Jessica, at a bar. As Jessica arrived, Kara convinced Reese to leave just before Peter could "introduce" her to him ("Blue Code").
While Reese and Stanton were in Morocco interrogating a suspect, Reese received a voicemail from Jessica, who sounded distraught. He called her back and could tell something was wrong, something she was not telling him. He told her that he would be in New York to see her the next day. However, Reese was denied leave, as he and Stanton were commissioned on a special assignment to China by Agent Snow.
Later that month, Peter assaulted and accidentally killed Jessica by bashing her head against their kitchen counter. He originally intended to call an ambulance, but later decided to cover up her death by staging a car accident.
After escaping from a trap set by his CIA handlers in China, Reese returned to the US and looked for Jessica. He went to the hospital she had worked at in New Rochelle to inquire about her, but discovered she had died two months prior. Without knowing it, he encountered a wheelchair-bound Finch in the hospital after learning the news. The Machine had been giving Finch Jessica's number over the years, but he had originally been unable to ascertain why it kept appearing. He eventually discovered that it was because she was living with someone who was an imminent threat to her. Jessica's death had a significant impact on Reese, leading him to alcoholism and contemplating suicide before his meeting with Finch ("Pilot").
Carter began investigating Jessica's death after the FBI invited her to help investigate Jessica's husband, whom they believed Reese had murdered for owing money to a loan shark. Although the official autopsy report attributed Jessica's death to a car accident, Carter's investigation led her to discover that Jessica had been abused by her husband and was killed prior to the "accident". She also discovered Jessica's past relationship with Reese and destroyed the evidence to keep the FBI from finding out.
Grace Hendricks
Grace R. Hendricks (played by Carrie Preston) was Finch's former fiancée who believed him to be dead.
Grace was born on April 12, 1969, in Columbia, South Carolina. In 1987, she enrolled in Rhode Island School of Design and graduated four years later with a Bachelor of Fine Arts, followed by a Master of Fine Arts from Yale University in 1994 ("The High Road"). She spent her college junior year in Venice, Italy ("Til Death").
While demonstrating the progress of the Machine to Nathan Ingram, Harold Finch was directed by the computer to Grace, who was painting in a park. Finch assumed the Machine's direction of him to her was a glitch ("The High Road").
After Finch seemingly corrected the glitch and returned to the park on another day while Grace was painting, the Machine directed him to her again. Harold discovered that, because he had asked the Machine to direct him to people with characteristics one would not expect, it had singled out Grace as the only person in the park who had no "dark secrets", along with her interest in Charles Dickens ("The High Road").
In January, Grace was painting in the park again when Harold, eating an ice cream cone, approached her and asked if she wanted one, too ("The High Road"). They started dating soon after. On her birthday in April, Finch sent her on a scavenger hunt that led to the Guggenheim Museum, where he surprised her with her favorite painting ("Til Death").
In 2010, Finch proposed to Grace. He later tells Ingram about it, as well ("Zero Day"). After the bombing that killed Ingram and injured Finch, Grace comes to the hospital looking for him. Finch realizes that if the government finds out that Finch is alive, they will kill him and Grace, as well. Concerned for Grace's safety, Finch pretends that he is dead and leaves the hospital ("God Mode").
Grace is working as a cover illustrator for The Boroughs magazine. She keeps a photograph of herself and Harold in her living room, unaware that he is watching her from afar in order to protect her from the people guarding the Machine. It is implied that, even though her work as a non-digital illustrator is falling out of favor in modern culture, Harold has been ensuring through his many business connections that she will always have work ("No Good Deed").
In 2013, Grace is chased by Decima Technologies in order to get at Finch. She is eventually captured and traded for Finch. During the trade, Grace stumbles and is caught by Finch, but because she is blindfolded, she remains unaware of just who helped her. She later leaves for a new life in Italy arranged by Finch ("Beta").
In 2016, the Machine simulated that in a world where it never existed, Finch would never have met Grace (".exe").
In 2016, after Samaritan is destroyed, Finch tracks Grace down in Italy, where she is painting in a public location. Grace is shocked to see Finch but visibly pleased ("return 0").
Nathan Ingram
Nathan C. Ingram (played by Brett Cullen) is Harold Finch's deceased collaborator and business partner who co-created the Machine, and died under circumstances not known until "God Mode". Ingram is the founder of IFT and acted as the interface between the government and their company while the Machine was under development.
Ingram was born on June 16, 1962, in Freeport, Texas. He enrolled at the Massachusetts Institute of Technology (MIT) in 1980 and left with an incomplete Bachelor of Science degree in computer science in 1983 ("The High Road").
He founded his company in 1983 ("The High Road") along with his best friend and MIT classmate Harold Finch (known to him as Harold Wren), who disguised himself as an ordinary employee ("Ghosts").
On September 11, 2001, Ingram visited Finch, who was working on a new program, to break the tragic news of the terrorist attack on the World Trade Center earlier that day. As they both watched the news in shock, Ingram told Finch that while they had changed as they aged, the world had not until this moment. Ingram proposed that they try to change the world for the better ("One Percent").
At some point, Ingram made contact with the U.S. government to build the Machine. He acted as the public face of IFT and was the man with whom the government dealt, keeping up the facade that he alone was building the Machine while, in fact, Finch covertly built it.
On June 10, 2002, following another ceremony, and after dismissing half of his company's staff, Ingram arrived at an abandoned office floor in the building, and discovered Finch's progress with the Machine. Supplied with the feeds from the NSA, the Machine could track and listen to all the citizens of New York. Finch told Ingram the next step would be teaching the Machine to pick out the terrorists from the general population ("Ghosts").
Ingram met with Alicia Corwin, a government worker who supplied the Machine's various feeds. When Corwin requested information regarding the progress of the Machine, Ingram handed her the Social Security number of a DIA agent, who was later found to have been colluding with the Iranian government to sell weapons-grade uranium. Ingram then met with Denton Weeks to discuss the fact that the Machine could not track specific people; Weeks threatened to reduce the payment for the Machine and was dismayed to find that Ingram was providing it for a minuscule one US dollar. Ingram later celebrated with Finch over the Machine's success in discovering a traitor and got a closer look at how the Machine operated. It was at this time that the Machine considered Ingram as the subject of a potential threat ("Super").
Ingram learned of the Machine's programming to ignore any "irrelevant" crimes: attacks on a smaller scale than those that present a threat to the nation. Every night at midnight, the Machine would erase its list of potential crimes. Clearly disturbed by this piece of information, Ingram stood by as Finch stated that the Machine was built not to save "someone", but rather "everyone" ("Ghosts").
Ingram came to realize that the people who were to receive the Machine were not entirely trustworthy. During a meeting with Alicia Corwin, he accidentally let slip that eight people knew about the Machine rather than seven. Later, he tried to convince Finch to make a back door into the machine, only for Finch to refuse. Just prior to shipping the Machine, Ingram used his administrative access to install a new function named "contingency". After creating the back door, the Machine was relocated to its new home ("No Good Deed").
After the Machine is shipped out on a freight train, Ingram meets with Finch in their now-empty laboratory. The two get into an argument over whether the Machine saves enough lives, and Finch responds that the Machine is gone. Ingram implies he built a back door into it, which angers Finch, as they had agreed not to. Finch reminds Ingram that they agreed not to play God, but Ingram reveals that he is not proud of that agreement. Finch argues that they did what they set out to do, and that "either we move onto the next thing together, or we don't" ("One Percent").
Ingram leaves the meeting and the Machine sends him the number of a woman named Anna Sanders. Ingram researches and follows her, discovering that she has a restraining order out against her ex-boyfriend because of domestic violence. While watching her from his car one night, he sees a suspicious looking man with a hood and hat concealing his face. Ingram pulls out a pistol and gets ready to follow them ("One Percent").
In 2010, Finch confronts Ingram about the Contingency and once again tells him that they cannot play God. He shuts down the back door and lets the irrelevant list be deleted. Later, they agree to meet on a ferry. Unknown to them, government assassin Hersh has brought in a terrorist named Asif, a suicide bomber with explosives in his van. As Ingram greets Finch, the bombs detonate, killing Ingram and severely injuring Finch; these are the present-day injuries that force Finch to limp and leave him unable to turn his head. Finch goes to access the irrelevant list and is horrified to find that Nathan was on it. He begins to consider that his friend's idea was not so crazy after all, and reopens the back door that allows him access to the irrelevant list ("God Mode").
Will Ingram
William "Will" Ingram (played by Michael Stahl-David) is Nathan Ingram's son. After his father's death, Will began to delve into what kind of work his father was working on.
Will managed to unearth the name of Alicia Corwin, an employee of the White House who dealt with his father. Will met with Corwin outdoors and asked her about the Machine as well as the one dollar transaction. She claimed that IFT was on the verge of bankruptcy and that Corwin struck a deal for Ingram's patents. Upon her telling him to let it go, he claimed she sounded just like his "Uncle Harold". Spooked, she ended the meeting and fled.
Unsatisfied with what he found, Will got an offer to work in Sudan to continue his health services. Accepting, he bade farewell to Finch and left ("Wolf and Cub").
Taylor Carter
Taylor Carter (played by Kwoade Cross) is Detective Joss Carter's teenaged son. He was kidnapped by Carl Elias after his mother started an investigation into him, but was later freed by John Reese. He was sent away to live with his father when his mother decided to go after the HR organization.
Lee Fusco
Lee Fusco (played by Sean McCarthy) is Detective Lionel Fusco's son. He was held hostage by HR when Fusco refused to say where Carter and Reese (who had kidnapped the head of HR) were. Sameen Shaw managed to save him in time, though doing so meant forgoing a rescue of Fusco, who luckily managed to get free by himself.
Daniel Casey
Daniel Casey (played by Joseph Mazzello) is a computer hacker who becomes an asset of the Machine. After uncovering and stealing part of the Machine's code in 2010, he is labelled a traitor and becomes a target of both Decima and CIA agents John Reese and Kara Stanton. Casey is eventually confronted by Reese, who ultimately lets him escape and go into hiding.
He is later found by Root and he helps reprogram seven computer servers so that he and the other assets of the Machine can hide in plain sight when Samaritan goes online.
Jason Greenfield
Jason Greenfield (played by Michael Esper) is a computer hacker and former member of Vigilance who becomes an asset of the Machine. With Vigilance after him, the CIA fakes Greenfield's death and he is secretly held at a black site awaiting rendition. Root and Shaw manage to free him and then send him into hiding to await the Machine's orders.
He is later called upon by Root and helps reprogram seven computer servers so that he and the other assets of the Machine can hide in plain sight when Samaritan goes online.
Daizo
Daizo (played by Alex Shimizu) is an asset of the Machine who was recruited by Root. He helps reprogram seven computer servers so that he and the other assets of the Machine can hide in plain sight when Samaritan goes online.
Romeo
Romeo (played by Andreas Damm) is the leader of a group of thieves that covertly uses an online dating app to recruit new members. Bored with her cover identity as a department store cosmetics counter employee, Shaw becomes the getaway driver for Romeo's gang with the Machine's help. Despite many successful heists, she is eventually forced to quit when Romeo and his men nearly get her arrested. Later, Romeo is found and tortured by Martine Rousseau for information about Shaw's identity.
Persons of Interest
This is a list of characters who were in danger and were helped by Team Machine; they become allies of the Team in later episodes.
Zoe Morgan
Zoe Morgan (played by Paige Turco) is a fixer who specializes in crisis management and a possible love interest of John Reese, whom Reese and Finch helped save. She returns the favor by helping them out with persons of interest that require her skills.
Morgan grew up in a nice house in Yonkers, New York. Her father was a city official who got ensnared in a corruption case. The press camped out on her lawn for weeks, only dispersing when a man the party had sent to "fix" things showed up. This was what prompted her to become a fixer. She spent the rest of her childhood in a small apartment in Queens with her mother.
When Zoe's number came up, Reese took a job as her driver to investigate. Zoe was hired by Mark Lawson, the CFO of Virtanen Pharmaceuticals, to retrieve an incriminating recording, purportedly revealing an affair. When she returned it to his men, they attempted to kill her, but she was saved by Reese's intervention.
Using her own sources, independent of Reese and Finch, Zoe figured out that the woman in the recording was Dana Miller, a former employee of Virtanen, who was going to blow the whistle about one of their drugs and was killed for her trouble. She chose to take down Virtanen with her information because she knew a girl once whose situation "kind of reminds [her] of Dana" (and only partially because they tried to kill her).
When she and Reese were captured, Zoe appeared to betray him with a kiss, but used the opportunity to slip him a paper clip so that he could escape his handcuffs. She led Mark Lawson to a navy yard, but he figured out that she had never really sent the recording to anyone else. Before he could do anything with this information, Reese showed up to rescue Zoe.
She gave Virtanen's competitor Beecher Pharmaceuticals a tip about the drug, then donated a large sum of the money she earned to Dana Miller's family for their lawsuit against Virtanen.
Zoe helps Reese and Finch again when Finch calls upon her to help him solve a case. She helps Finch and Reese clear the name of a man framed for murder. At the end of the episode, she sees Mr. Reese briefly and flirtatiously suggests he might consider buying her a drink as payment for her services.
She later comes to the rescue again while investigating a hit ordered through HR on a professional therapist, but arrives too late. She is the one who discovers that the therapist, Caroline Turing, is really "Root" and therefore the perpetrator, having framed herself as the victim. She tells Finch to be careful because she would hate for anything to happen to their "mutual friend", Mr. Reese.
After Finch is rescued from Root, Zoe once again assists Reese with his latest person of interest, Maxine Angelis, by posing as one of John's ex-girlfriends. Maxine recognizes her and comments that every reporter in town would love to interview her. Maxine later has John call Zoe to find out who the head of HR is, and Zoe gives them information on two former Federal Bureau of Investigation (FBI) agents who are working for him.
Zoe is called upon again, but this time to pose as John's wife while living in the suburbs. While "married" to John they played poker and kept an eye on Graham Wyler.
After Zoe helps John and Harold spot an assassin targeting a maid in a hotel, and once the danger has passed, John invites Zoe to stay with him in the penthouse suite; Harold has bought the hotel, so they can spend some time together.
Leon Tao
Leon Tao (played by Ken Leung) is a former forensic accountant and financial criminal, as well as a three-time person of interest, who has assisted Reese and Finch in some cases. Leon was targeted to die by the Aryan Brotherhood and the Russian Mafia and was saved both times by the timely intervention of Reese and Finch. Tao returns the favor by helping them out with persons of interest that require his skills.
After receiving a Master of Business Administration from New York University, Tao worked as an accountant at Bear Stearns until he was downsized when the firm was sold to JPMorgan Chase during the recession. He then took a job at what he believed was a small start-up, which was in fact the corporate arm of the Aryan Brotherhood, and he was laundering money from their methamphetamine sales.
The Machine produced his number after he stole eight million dollars from his employers and lost a large portion of it in the stock market. It is hard to say how much he actually lost, as he repeatedly lied about the amount. Tao took one million in bearer bonds and attempted to run away. He was captured by operatives of the Aryan Brotherhood, but Reese helped him escape ("The Contingency").
Tao's number came up again after he got involved in gold farming and subsequently found himself in trouble with the Russian Mafia. Reese took care of the assailants after they threw Tao through a window. Tao was then blindfolded and brought to the Library, because Reese did not know how else to protect him while he and Finch were out saving another number.
Tao eventually made himself useful by helping Finch track down the people threatening Madeleine. He also made friends with Bear by sharing a bowl of instant noodles with him. After he successfully assisted Finch and Reese, he told them that he was not the kind of guy to let them down ("Critical").
Tao posed as an emergency medical technician to help Finch and Reese "abduct" Sameen Shaw after she was supposedly murdered by Hersh ("Relevance").
Harper Rose
Harper Rose (played by Annie Ilonzeh) is a drifter and opportunistic con artist who first appears as a person of interest when she tries to independently double-cross both a drug cartel and the Brotherhood. At the end of the episode "Skip", it is revealed that the Machine is starting to anonymously use her as an asset. In "Synecdoche", it is revealed that Harper has become part of a second team working for the Machine in Washington, D.C., alongside former persons of interest Joey Durban and Logan Pierce.
Caleb Phipps
Caleb Phipps (played by Luke Kleintank) is a computer genius and an irrelevant number in Season 2. Originally a high school student hiding his true genius, Caleb is protected by Finch and Fusco while Reese is locked up. Finch discovers that Caleb is writing a revolutionary compression algorithm and is secretly a drug lord, but the true danger to Caleb proves to be Caleb himself, who is suicidal following the accidental death of his older brother. Finch talks the young man out of suicide, and he returns in Season 4 as the head of a tech company where Root goes to work. Finch and Root later attempt to steal Caleb's algorithm to save the Machine from destruction, but he provides them with it as thanks for Finch saving his life. Root also takes some of Caleb's experimental hard drives to store the Machine's core code on.
Logan Pierce
Logan Pierce (played by Jimmi Simpson) is an eccentric genius and billionaire who created a social networking site called Friendsczar. In season 2 he is an irrelevant number whom Reese is trying to protect from a potential threat. Logan proves to be surrounded by multiple people who want him dead, including his lawyer and his own best friend. After the threat is eliminated, Logan is removed as CEO of his company, but decides to go into business with his rival instead. Reese realizes that Logan knew about the threat from his friend the whole time, but didn't warn Reese as he wanted to see how Reese works. Logan presents Reese with an expensive watch as a gift that is accurate to the nanosecond, but Finch smashes it and uncovers a hidden tracking device. Finch and Reese realize that Logan is just curious enough to be dangerous, especially with his bottomless pockets and endless need to know things. Logan is reclassified by the Machine as "threat to Admin" as a result. Finch later gives the watch Logan gave them to Lou Mitchell so that he can fix it up and sell it to get the money Lou needs to buy his favorite diner.
Logan returns in season 5 where he is mentioned on the news as refusing to give the government access to his clients' personal data. Logan helps Reese get into a function by buying him a fifty thousand dollar ticket, claiming that he is "championing a new cause," but his presence during a thwarted terrorist attack and Logan's subsequent disappearance cause Reese to suspect Logan as the culprit. After Reese, Shaw and Fusco thwart a Presidential assassination, they meet with Logan, Harper Rose and Joey Durban who reveal that they are working for the Machine as another team fighting crime, the new cause Logan had previously mentioned. While Team Machine was assigned to protect the relevant number of the President, Logan's team was given the irrelevant number of Reese's cover identity and positioned themselves strategically to help out. In Logan's case, along with getting Reese into the function, he knocked out power to the house being used as the terrorists' base of operations, allowing Shaw and Fusco to take them down and anonymously alerted the Secret Service to the threat. Logan provides the team with a traffic camera picture of the missing Finch before departing with Harper and Joey to take care of their next number.
Joey Durban
Joey Durban (played by James Carpinello) is a former member of the US Army and one of the first numbers investigated by Reese. Feeling guilty over the death of a friend overseas, Joey joins a gang of robbers whose leader eventually kills the members after promising them profitable retirements. While working with the gang, Joey helps to steal the file on the murder of gang boss Carl Elias' mother and is saved from death by Reese. Joey is then convinced by Reese to leave and find a new life with his fiancée Pia. In the fifth season, Joey returns as Team Machine protects the President of the United States from an attempted assassination. Joey reveals that he has since turned his life around and married Pia, but Reese is forced to brush him off. After the assassination is stopped, Joey, dressed in an army uniform, rescues Reese and Shaw from three of the terrorists and provides them uniforms to use to escape. Joey, Harper Rose and Logan Pierce are then revealed to be working for the Machine as a second team: they were sent the irrelevant number of Reese's cover identity while Team Machine was sent the relevant number of the President. The trio departs to chase their next number while Reese, Shaw and Fusco go after the missing Finch.
Organized crime figures
Carl Elias
Carl G. Elias (played by Enrico Colantoni), also known as Charlie Burton, is a nascent crime boss and the illegitimate son of Mafia don Gianni Moretti. Elias is determined to revive the crime families of New York City and to eliminate the Russian mob, with the assistance of HR.
In 1981, a young Carl Elias was implied to have been living with a foster family. He was being treated for wounds sustained during a fight which broke out after he was called a bastard due to his lack of a father. He asked his foster mother if she was willing to help him research his father, but she asked him to let it go ("Flesh and Blood").
In 1991, Elias formally met his father, Don Gianni Moretti, at a restaurant. Moretti recognized his son and promised Elias a place in his organization as long as he remained loyal, tenacious, and capable.
Later, Elias found out that Moretti had double-crossed him and ordered his execution. Before he could be executed, Elias fought back. He managed to kill his executioners and escape, though his hands were scarred after he grabbed razor wire meant for his throat.
Elias first showed up on Reese and Finch's radar when he had Sam Latimer have his men, including an undercover John Reese, retrieve a file titled "Elias, M." Latimer delivered the file to him, but to cover his trail, Elias had him killed and fled.
After Elias acquired the file, he found the name of the man who had killed his mother, Marlene Elias, and sought revenge on him: Vincent DeLuca. Elias went to Vincent's house and stabbed him to death with the same knife Vincent had used to kill Marlene. Later, when Elias realized that the detective who had investigated the case was on his trail, he went to retired detective Bernie Sullivan's house and killed him. Unfortunately for Elias, Detective Carter was on her way, and in the ensuing shootout Elias' blood was left at the scene.
Reese came into direct contact with Elias for the first time while Elias was posing as Charlie Burton, a school teacher trying to avoid being killed by a rival Russian crime family. Unaware that Elias was a murderer himself, Reese befriended him while helping him escape from the men who were after him. When they succeeded, Elias turned on Reese: he captured one of the sons of the rival family, shot him in the leg, forced Reese to tie himself up, and left the scene. He was later seen escaping with his men as he expressed his goal of reuniting the five crime families of New York City.
Elias was a main suspect when Detective Carter's number came in from the Machine. It was revealed he was indirectly involved by hiring her confidential informant to complete the hit. Elias left a vase of flowers on Carter's desk with a message mourning her loss and stole a file marked Elias from her desk.
Carl Elias resurfaced after Reese called him using a phone left in a trash can by Anthony "Scarface" Marconi, who was impersonating a police officer.
Upon his father's release from prison, Elias ordered some of his men to kidnap him. The plan failed when Carter and Reese intervened and took Moretti into protective custody. Reese later called Elias, requesting his help to find a child kidnapped by an Eastern European gang. Elias agreed to help but threatened the child himself once Reese had found her. Reese was forced to give up Moretti's location in exchange for the child's life, and Elias let them both go. Although Reese tried to save Moretti, Elias succeeded in capturing his father.
Elias's plan to unite the five families required him to kill the five Dons. He recruited HR, a cabal of corrupt cops, and used them to help him take down the Dons. After Elias killed one Don with a car bomb planted by Scarface and gunned another down in front of Carter, Detectives Carter and Fusco took the remaining three into protective custody, barely escaping Elias's men. Elias then personally went to their hideout with some HR cops and ordered the kidnapping of Taylor Carter. He was unaware, however, that Harold Finch had shown Officer Simmons, an HR cop, photos of Elias's men watching the families of HR cops. The HR cops severed their ties with Elias, and several clean police officers came to the hideout. Carl Elias was forced to surrender.
After being processed and jailed, Elias made one final phone call to Gianni Moretti, telling his father he wished he could be there to see it. Moretti's car was then blown up by Scarface, killing Moretti and his legitimate son, Gianni Moretti, Jr. Elias hung up the phone with a slight grin.
Several weeks later, Elias received a visit from Finch, who asked for his help in one of their cases involving several mafia groups and hit men. Grateful that Finch and Reese had saved his life, Elias used his power, even from behind bars, to help them. In return, he asked Finch to play chess with him.
After several more weeks at Rikers Island, Elias noticed that Reese had been arrested on suspicion of being "The Man in the Suit". He told Reese that he knew all about the FBI's investigation and that he would help Reese in any way he could. When Reese was dumped into the general population in an attempt to get him to reveal his true identity, he was attacked by an inmate; because Reese could not fight back, Elias broke up the fight.
When HR attempts to have the Russians kill Elias, Carter rescues him from the trap and helps him to a secure location. She keeps his presence quiet as she goes to him for information. Elias helps Carter take down HR by setting the Russians against the dirty cops, but she rejects his offer to simply kill everyone for her. After Carter is murdered by Simmons, Elias repays the debt by having Simmons executed in his hospital bed as Elias watches.
Throughout season 4, Elias is at war with Dominic Besson and the Brotherhood. Due to their own war with Samaritan, the Team is unable to help him much, and his best friend Anthony dies. Elias is later captured by the Brotherhood along with Reese, Fusco and Harper Rose. Fusco is able to escape, but Dominic prepares to execute Elias and Harper. Elias is saved when the Machine puts Reese into God Mode, enabling him to escape and take down the Brotherhood. The Brotherhood and Elias are arrested by officers led by Fusco, but one of Elias' men intercepts his transport van and attempts to help him escape. A standoff with Dominic ensues that ends with Dominic surrendering to Fusco moments before both he and Elias are shot by a Samaritan sniper as part of the Correction, killing Dominic and leaving Elias' fate unknown.
In "B.S.O.D.", it is stated that Elias is dead. In "ShotSeeker", Elias' old friend Bruce Moran attempts to avenge him, only to learn that Elias is still alive. Elias is revealed to be recovering at Finch's safe house, where Fusco took him after saving his life. The bedridden Elias, now aware of at least Samaritan's existence, attempts to convince Moran to go back into hiding, but Moran refuses.
Over the following episodes, Elias continues to recover from his wounds while appearing to the world to be dead. In "Reassortment", after learning of Bruce Moran's death, Elias supplies Fusco with a lead on a case he is working. He is later confronted by Finch over his actions and tells Finch that he needs to let Fusco in on the truth. In "Sotto Voce", Finch enlists Elias' help to track down the elusive criminal mastermind known as "the Voice". Elias agrees to help in exchange for coming along as protection, as the Team are the only friends he has left. With Elias' help, Finch tracks down "the Voice's" hideout, only to learn that he is their person of interest, Terry Easton. As Easton holds a gun on Finch, Elias intervenes and threatens Easton into backing down. The two men allow Easton to drive away before Elias detonates a bomb on Easton's getaway car, killing him. Elias then implies that Finch brought him along to kill Easton, as Finch had to know what Elias would do.
In "The Day the World Went Away", Finch's cover is blown and Elias offers to protect him while the Team goes on the offensive against Samaritan. Elias takes Finch to the apartment buildings Reese protected him in when they first met and the two reminisce. However, Samaritan figures out where they are and sends agents. With the help of Elias' men stationed through the building, the two escape outside where they find Elias' driver has been murdered. Moments later, Elias is shot in the head and killed by a Samaritan agent as he attempts to protect Finch. After being called to the scene of Elias' death, Reese removes Elias' glasses and closes his eyes as a sign of respect towards Elias. He is then approached by a gang member with information on the car that took Finch as the gang member respected Elias and Elias respected Finch. The information Reese is given enables Root and Shaw to rescue Finch.
Anthony Marconi
Anthony S. "Scarface" Marconi (played by David Valcin) is a member of Elias's mob group and serves as his second-in-command and principal enforcer. He is informally known as "Scarface" because of the easily identifiable scar on his right cheek; his real name is never mentioned by any character in the episodes in which he appears. Marconi is mistaken by many for an HR officer, but he works only for Elias; his police jacket is a disguise.
When his boss, Carl Elias, was on the run from a rival mob (the Yogorovs, led by Ivan Yogorov), Marconi disguised himself as a police officer. While he gathered information on his boss's whereabouts, Harold Finch became suspicious of him and started following him, thinking that he might actually be Elias.
When Elias had finally escaped the Russian mob with the help of John Reese, Marconi rendezvoused with his boss by knocking out Fusco who was waiting to intercept Reese and Elias. Later, Marconi killed Ivan Yogorov and was with Elias as he expressed his desire to reunite the five families.
Marconi resurfaced by again impersonating a police officer and dropping a burner phone in a trash can for Reese and Carter to find. Marconi also killed a corrupt Security and Exchange Commission investigator who he'd taken into "custody".
Marconi was present when Reese met with Elias in order to get information on a kidnapper who had taken a baby. Elias agreed to help and had Marconi escort Reese to the location where the baby would be offloaded to be shipped to Eastern Europe. After Reese rescued the child, Marconi ambushed him and brought him back to Elias. Marconi personally took the child to Elias for use as leverage to get the location of Gianni Moretti from Reese. Marconi then severely injured Moretti's police guard Bill Szymanski and took Moretti captive.
Marconi was responsible for a car bomb that killed one of the five dons of the New York crime families in plain view of Reese. He later informed Elias that the remaining dons had been taken into protective custody by Detectives Carter and Fusco. Marconi then kidnapped Carter's son Taylor from school on Elias's orders, shooting it out with Reese before escaping with Taylor captive. Marconi again fought with Reese during Reese's rescue attempt of Taylor and successfully escaped once more, though both Taylor and Moretti were rescued.
Despite his employer now being imprisoned, Marconi continued his duties and planted a bomb in a car that killed both Gianni Moretti and his legitimate son Gianni Moretti, Jr.
Simmons and Fusco later met with Marconi to have him get Elias to bring back HR; however, Marconi told them in order for that to happen they had to bring Elias the last of the original dons. Despite attempts to capture the last don, the remaining members of HR failed to do this since the don was now working with Elias.
Later in the series, he is captured, along with Elias, by Dominic ("Mini") Besson, the leader of a rival gang trying to gain control, who has killed or turned several of Elias's men and now demands the combination for a vault in Bruce Moran's office. After cautioning Dominic's second-in-command about whether Dominic can hold power and be trusted, Marconi watches as Dominic relays the combination (given to him by Elias) and as Link's man opens the vault, triggering an explosion that destroys the entire floor and kills Marconi. Dominic and his second-in-command escape, but lose some of their men in the process. Later, when Elias is again captured by Dominic, he avenges Scarface's death by playing a mind game with Dominic, convincing him that his number two, Link, has betrayed him. Dominic kills Link before realizing it was a trick.
Bruce Moran
Bruce Moran (played by James LeGros) is Elias' accountant and a close friend of both Elias and Anthony Marconi since boyhood. He is eventually killed by Samaritan agents and discovered by Fusco in a dumping ground for Samaritan victims. The killer is later revealed to be FBI Agent Martin LeRoux, who is in reality a Samaritan agent.
Gianni Moretti
Don Gianni F. Moretti, Sr. (played by Mark Margolis) was a mob boss and the father of both Gianni Moretti Jr. and Carl Elias. As Elias was born out of wedlock during an affair, Moretti Sr. denied his existence and even tried to have him killed when his illegitimate son sought him out. He was killed along with Moretti Jr. in a car bombing carried out by Elias's main enforcer, Anthony Marconi (a.k.a. Scarface).
Peter Yogorov
Peter I. Yogorov (played by Morgan Spector) is the leader of the Russian Mafia and, in the first three seasons, Elias's main enemy. The Russian Mafia was originally run by Ivan Yogorov, Peter's father, until he was killed by Elias's enforcer, Anthony Marconi, so that Elias could take over Brighton Beach. Peter was freed from prison by HR; his brother, Laszlo Yogorov, remains incarcerated at Rikers Island.
The Brotherhood
The following characters are involved in the Brotherhood drug gang storyline:
Dominic Besson
Dominic Besson (played by Winston Duke), also known as "Mini" (as he is quite big), is the leader of the Brotherhood. When first introduced, he was posing as his own lackey to prove that people underestimate him. He soon becomes Carl Elias's main enemy. After Reese defeats the Brotherhood using God Mode, Dominic is arrested by Fusco and the NYPD. He is later killed by a Samaritan sniper as part of the Correction.
Link Cordell
Lincoln "Link" Cordell (played by Jamie Hector) is a violent gang member who acts as Dominic's right-hand man. He was at one point arrested by Detective John Riley (Reese's new cover identity), but Dominic paid someone to take the fall for Link, and he was released. Link is later killed by Dominic after Elias tricks Dominic into believing that Link has betrayed him.
Floyd
Floyd (played by Jessica Pimentel) is another of Dominic's higher-ups, often appearing in place of Link. It is assumed that she is arrested with the remaining Brotherhood members by Detective Lionel Fusco and the rest of the NYPD.
New York City Police Department
Cal Beecher
New York City Police Department Narcotics Detective Calvin T. Beecher (played by Sterling K. Brown) is a narcotics detective at Carter's precinct with whom Carter begins a relationship.
He helps Carter with the Drakes case ("Til Death"). He later asks Carter out on a date when she says that she owes him a favor for helping her with the case. She agrees to go out with him.
Beecher is Alonzo Quinn's godson and gives his godfather information ("Shadow Box"). It is unknown if Beecher is aware of Quinn's activities.
Later, when Carter receives a job offer with the FBI pending a polygraph, she is turned down because of her relationship with Beecher. The FBI tells her that Beecher is currently under investigation ("Booked Solid").
In "Trojan Horse", realizing that Beecher knows too much about his involvement in HR, Quinn orders his godson killed. Beecher chases after some drug dealers, only to be led into a trap and a shootout with HR men. The Machine produces Beecher's number too late, and he is killed in the shooting. Carter arrives at the scene and is devastated to find her friend dead, especially as Fusco had cleared him of any involvement in Bill Szymanski's death.
Following Beecher's murder, part of Carter's motivation for taking down HR is revenge. In "Endgame", with the help of Finch, Carter is able to record Alonzo Quinn confessing to ordering Beecher's murder and he is arrested for it along with all of his other crimes.
Bill Szymanski
New York City Police Department Detective Bill Szymanski (played by Michael McGlone) is a police detective in the organized crime division at NYPD 8th Precinct who assisted Carter with a mob murder committed by Carl Elias ("Witness", "Baby Blue").
He is an honest cop whom Carter entrusted with the job of guarding Gianni Moretti, Elias' father. Ultimately, Reese had to reveal Moretti's location to Elias in order to save a person of interest, and Szymanski was severely injured in the line of duty and taken to the hospital. Fortunately, he survived ("Baby Blue", "Identity Crisis").
Szymanski fully recovered and went back to active duty where he worked on the investigation of the murder of George Massey's son, who was killed by Riley Cavanaugh ("Triggerman").
Later, Szymanski and the assistant district attorney meet with Alonzo Quinn about testifying against the Yogorov mob family. Since HR has ties to the Yogorovs, Quinn kills Szymanski and the ADA in order to keep them from testifying ("All In").
In a simulated world where the Machine never existed, Szymanski is still alive and a part of the NYPD's Homicide Task Force.
Joseph Soriano
New York City Police Department Detective Joseph Soriano (played by Ned Eisenberg) is an Internal Affairs Division detective who twice investigates Lionel Fusco. He first appears in (2.20 "In Extremis") when he investigates information from a jailhouse informant that links Fusco to the disappearance of Narcotics detective James Stills. Fusco feigns ignorance about the fate of the detective, but Soriano seems to get concrete proof when cadaver dogs signal the location of a body at a spot where Fusco's vehicle was on the night of Stills' disappearance. He takes Fusco to the spot, but is shocked when he finds that someone has dug up and removed the body. With no more evidence and the recantation of the informant, he is forced to drop the investigation and Fusco is allowed to return to duty.
(5.1 "BSOD") Soriano is part of a joint investigation with FBI Special Agent Martin LeRoux that is looking into Fusco's version of the circumstances of the deaths of Dominic and Carl Elias. LeRoux, a Samaritan operative, doctors the official reports and Fusco is cleared of any wrongdoing. Visibly upset with the verdict, Soriano is last seen leaving the 8th Precinct. Later, another IAD detective tells Fusco that Soriano has suddenly died of a heart attack. It is implied that he was actually killed by Samaritan to prevent any further investigations into events that transpired during the "Correction."
Kane
New York City Police Department Homicide Detective Kane (played by Anthony Mangano) is a police detective who sometimes shares his cases with Carter.
Dani Silva
New York City Police Department Detective Dani Silva (played by Adria Arjona) is an Internal Affairs Division detective who is undercover trying to find a mole in the NYPD Cadet program when she becomes a person of interest. She finds the mole, but is framed for her handler's murder at the hands of a criminal gang called the Brotherhood. Reese is able to help her clear her name. She later transfers into the NYPD Gang Division, where she and Fusco work together to investigate another person of interest.
HR
The following characters are involved in the HR storyline, in which a group of corrupt police officers work in collaboration with an up-and-coming mob boss to control organized crime in New York.
Alonzo Quinn
Alonzo D. Quinn (played by Clarke Peters) is a political adviser for New York City councilman and mayoral candidate Ed Griffin (Richard V. Licata) and the leader of HR. Quinn's leadership is subtle enough that even Harold Finch only suspects that HR still has someone in charge.
With many of his subordinates arrested by the FBI, Quinn tricks a local reporter, Maxine Angelis, into thinking that Christopher Zambrano is the head of HR. This results in Zambrano's death and the destruction of Angelis' career. Quinn, however, leads his candidate Ed Griffin to an even greater victory by framing Landon Walker as the leader of HR, restoring Angelis' career in the process ("Bury the Lede").
After the political victory, Quinn becomes Ed Griffin's chief of staff in the Mayor's Office. Quinn later asks his number two man, Patrick Simmons, to try to reforge HR's alliance with Elias by giving them mafia don Luciano Grifoni. This, however, does not go as planned: Grifoni is now working with Elias, who has one of Quinn's new subordinates killed and declares that he is done with HR ("C.O.D.").
Quinn starts meeting with his godson, Detective Cal Beecher, for information, with Patrick Simmons standing guard. Quinn meets with Simmons to inform him of his plan to form an alliance with Russian mobster Peter Yogorov. Since Elias refuses to work with them, Quinn decides to strike a deal with the Russian Mafia in order to secure funds for the rebuilding of HR ("Shadow Box").
Quinn is eventually identified to Carter as the head of HR by a dying Raymond Terney. ("The Perfect Mark") Pretending to give up on the hunt for Cal's killer, Carter uses Finch's phone cloning program to clone Quinn's phone and spy on and record him as she prepares to bring HR down. Carter eventually tricks Quinn into confessing to having Cal murdered, which Finch records through Quinn's phone. Taking Quinn into custody, Carter flees with the help of Reese, chased by HR. ("Endgame") Unable to trust the NYPD, Carter works with Reese to bring Quinn to the FBI across the city with HR, led by Simmons, hunting them. Despite HR's intervention, Carter is eventually able to get Quinn to the FBI and he is taken into custody as the head of HR, leading to the end of HR. ("The Crossing") Quinn is subsequently put into protective custody by US Marshals, but is hunted by Reese for Simmons' location after the murder of Carter. Reese is able to force Quinn to give up Simmons' escape plan, but his injured state keeps Reese from killing Quinn before he collapses. Quinn is left alive and in custody to face punishment for his crimes while Fusco is able to use the information Quinn gave Reese to hunt down and arrest Simmons. ("The Devil's Share")
Patrick Simmons
New York City Police Department Officer Patrick M. Simmons (Robert John Burke) is a uniformed officer who is the right-hand man to Quinn; he handles HR activities on the street level ("C.O.D.").
Simmons was an old friend of Detective Fusco and was in collusion with Elias until learning from Finch that HR families were under Elias' surveillance. He also periodically meets with Fusco ("Flesh and Blood") and blackmails Fusco into working with HR ("Firewall"). Simmons also gave an anonymous tip to Detective Carter that another cop murdered Internal Affairs officer Ian Davidson, also an HR cop, in order to increase the pressure on Fusco.
He is later strangled to death by Scarface, as Elias watches on. ("The Devil's Share")
Artie Lynch
New York City Police Department Captain Arthur "Artie" Lynch (played by Michael Mulheren) was a major figure in HR with whom Fusco must appear to be working. Reese forced Lynch to deliver a message to Elias: back away from Carter, or Reese would kill him ("Get Carter").
A man named Andre Wilcox, who was receiving protection from HR rather than Elias, met with Lynch to arrange the release of one of his men, Brick, from jail. Though reluctant, Lynch agreed to do what Andre wanted. Lynch met with Captain Womack, the head of the precinct where Brick was being held, and told him to release Brick, which the captain did because he was a co-conspirator of Lynch's ("Wolf and Cub").
Lynch is killed by Fusco when Lynch is about to kill Reese ("Matsya Nyaya").
Womack
New York City Police Department Captain P. Womack (played by John Fiore) is the captain in charge of Homicide and Carter and Fusco's supervisor. Womack protects members of HR when Carter gets too close. Reese blackmails him into transferring Fusco to Carter's precinct.
Despite first appearing in the background ("Cura Te Ipsum"), he took on a more prominent role when he invited CIA agents Mark Snow and Tyrell Evans, along with Carter, into his office to talk about Reese ("Number Crunch"). He appears to be a friend and known contact of Artie Lynch, who went to him to ask for a favor, suggesting that the captain is corrupt. This is later confirmed when Womack is revealed to be incarcerated with other HR cops ("C.O.D.").
Raymond Terney
New York City Police Department Detective Raymond Terney (played by Al Sapienza) is a NYPD police detective who has been shown to work with Detective Carter on more than one occasion. He always seems to be very calm and polite.
He was investigating the murder of Vincent DeLuca, a former henchman of Don Gianni Moretti, Carl Elias' father ("The Fix").
Later, HR boss Alonzo Quinn meets with Detective Bill Szymanski. When he hears Szymanski say that he and the assistant district attorney will testify against the Russian Yogorov mob family, Quinn kills them both to prevent it. Terney, now revealed to be part of HR, comes in, and Quinn tells Terney to shoot him in the right shoulder, a survivable wound, to make it look like someone shot all three of them ("All In").
Later, Carter and Terney investigate a location reported to be an HR hideout. Carter shoots an armed suspect, but someone takes the gun away in order to frame Carter. At the police station, Terney reveals his true colors and threatens to kill Carter if she meddles in HR's business ("Zero Day").
Terney and Peter Yogorov are preparing to kill Carl Elias in a dark forest. A masked Carter comes out and wounds Yogorov. Terney begs profusely for his life, and Carter knocks him out and rescues Elias ("God Mode").
Terney eventually finds out that Carter wants to take down HR when he discovers she turned Laskey. A shootout occurs, with Laskey dying instantly, and a dying Terney revealing to Carter that Alonzo Quinn is the head of HR ("The Perfect Mark").
Mike Laskey
Michael "Mike" Laskey, born Mikhail S. Lesnichy (played by Brian Wiles) is a rookie cop affiliated with HR who is installed as Carter's new partner after she is demoted to officer for getting too close to HR. She turns him by threatening to frame him for the death of another dirty cop. While initially only helping because he's forced to, Laskey comes to see the truth about HR and aids Carter and the Team in stopping a money laundering scheme. In "The Perfect Mark", Laskey follows Simmons around to try to identify the head of HR. He is killed in a shootout with Raymond Terney, but his pictures are used by Terney to identify the head of HR to Carter.
James Stills
James Stills (played by James Hanlon) is a Narcotics detective from the 51st Precinct. In (1.1 "Pilot"), he is shown to be the ringleader of a group of corrupt HR officers that includes Lionel Fusco. When he tries to frame an innocent man for murder, Reese kills him with Fusco's service pistol and has Fusco bury his body.
(2.20 "In Extremis") When Fusco is investigated by Internal Affairs in connection with Stills' disappearance, his previous relationship with Stills is explored through flashbacks. In 2004, Stills helped Fusco get back on his feet after Fusco's alcoholism destroyed his marriage. In return for his kindness, Fusco reluctantly began participating in Stills' corruption.
Azarello
Azarello (played by Louis Vanaria) is a Narcotics detective from the 51st Precinct who is part of Detective James Stills' crew. He is arrested for corruption and attempted murder at the end of (1.1 "Pilot").
(2.20 "In Extremis") Flashbacks show Azarello as a longtime willing participant in corruption with his partner James Stills. In 2013, he attempts to get his prison sentence reduced by giving information to NYPD Internal Affairs about Lionel Fusco's involvement in Stills' disappearance. Joss Carter eventually enlists help from mob boss Carl Elias, who is able to use his resources to pressure Azarello into recanting.
Federal Bureau of Investigation
The following characters are involved in the pursuit of "the Man in the Suit" storyline.
Nicholas Donnelly
Federal Bureau of Investigation (FBI) Special Agent Nicholas Donnelly (played by Brennan Brown) was a federal agent who became interested in Reese when one of his cases crossed one of Reese's. Donnelly's intent was to expose the CIA for its illegal actions and crimes and to track down "the Man in the Suit". He periodically offered Carter the opportunity to work with him as he pursued Reese.
A native of Roanoke, Virginia, Donnelly studied law at Northwestern University. In August 1998, he joined the FBI after graduating from the FBI Academy with honors. He began with the Miami, Florida field office in counter-terrorism until he distinguished himself as an investigator and became a highly sought-after agent. In subsequent years, he moved from Miami to Portland, Oregon in 2000, then to San Diego in 2003. He was promoted to supervisory special agent for the Boston Field Office in 2007 and worked there until 2011. He was then transferred to New York City ("Prisoner's Dilemma").
He first appears investigating Scott Powell for the murder of Congressman Michael Delancey, and encounters the Man in the Suit when Reese breaks Powell out of FBI custody. Although Powell was later proven innocent, Donnelly started suspecting Reese of being a mercenary ("Root Cause"). Donnelly believed that Reese was selling his services to the highest bidder and working for Elias ("Identity Crisis").
Donnelly informs Carter that the FBI has information about DNA that ties Reese to a case involving smugglers ("Blue Code") and a cold case from 2011 in New Rochelle, New York. Donnelly invites Carter to assist on the investigation, and she accepts. In New Rochelle, he discovers the fact that the victim, Peter Arndt, was in debt to loan sharks, and he hypothesizes that the loan sharks had hired Reese to kill him ("Many Happy Returns").
Donnelly offers Carter a temporary role with the FBI when he finally manages to track Reese's phone to a bank. He tells her about his new theory that Reese is working for CIA Agent Snow and is receiving aid of some sort from China to help him further. He then makes his way to the bank with several armed FBI officers and Carter, and arrests four men wearing suits, one of them being Reese. When he asks Carter if she recognizes any of them as "the man in the suit", she tells him that she does not ("Shadow Box").
After getting Carter to interrogate the suspects, Donnelly finds out that Carter was conspiring against him. He arrests Reese and Carter, planning to take them to a safe house and incarcerate them shortly after. The Machine, however, identifies him as a person of interest and informs Finch, who tries to warn Donnelly but is too late. Kara Stanton intercepts them, killing Donnelly and kidnapping Reese ("Prisoner's Dilemma").
In "Dead Reckoning", Donnelly's former partner reveals to Carter that the FBI has identified the "Man in the Suit" as Mark Snow, following Snow's death at the hands of Kara Stanton.
Brian Moss
Federal Bureau of Investigation (FBI) Special Agent Brian Moss (played by Brian Hutchison) is the FBI Special Agent in charge (SAIC) of investigating the death of Nicholas Donnelly. He incorrectly came to the conclusion that Mark Snow was "the man in the suit". He then tried to recruit Carter to the FBI but had to reject her due to her relationship with Calvin Beecher, who was under investigation at the time for his involvement with HR. He later appears in "Proteus" to bring Carter a series of missing persons files she requested which, unknown to him, were the victims of an identity-stealing serial killer.
Martin LeRoux
Martin LeRoux (played by David Aaron Baker) is an FBI Special Agent, Samaritan operative and contract killer. He first appears in (5.1 "BSOD") when he is part of a joint investigation with NYPD Internal Affairs into Lionel Fusco's involvement in the deaths of Dominic and Carl Elias. To protect Samaritan's role in the "Correction", he eventually presents a drastically different chain of events than were initially in Fusco's report and then proceeds to allow Fusco to return to active duty.
(5.12 ".exe") LeRoux reappears after the NYPD discovers the bodies of the missing persons at the demolition site. He grills Fusco about evidence he collected, but Fusco feigns ignorance. LeRoux kidnaps Fusco and reveals that he is actually the one who murdered all of the missing persons, working at the behest of Samaritan. He tries to shoot and kill Fusco at a secluded beach, but is overpowered when it is revealed that Fusco is wearing a bulletproof vest. Fusco debates whether to arrest him or kill him. In the series finale, Fusco claims that he allowed LeRoux to live and has him locked up in the trunk of his car.
The Government
The following characters are tied to a government conspiracy related to the development and use of the Machine.
Alicia Corwin
Alicia M. Corwin (played by Elizabeth Marvel) was the liaison between Ingram and the government while the Machine was being developed. A former Deputy Assistant to the President for National Security Affairs (APNSA), she is one of the few people to know of the Machine's existence.
Corwin met with Nathan Ingram to determine the progress of the Machine. Ingram gave her a social security number associated with a DIA agent.
Later, she introduced Ingram to Denton Weeks who told Ingram that the DIA agent was found to be a traitor. Weeks was curious as to how the Machine actually worked, and was unhappy when he was told that the Machine could not be used to track specific people. He threatened to reduce the payment for the Machine, but Corwin explained to him that Ingram was building it merely for one US dollar ("Super").
Alicia met with Nathan in a bar to discuss the transfer of the Machine to its new location. They discussed dissemination and she assured him that nobody would be able to track the data back to the Machine or to Ingram. Yet she was visibly shaken and told Nathan that she was happy to return to her day job once everything related to the Machine was settled ("No Good Deed").
Corwin was in Morocco, where she met with CIA agents Mark Snow, John Reese and Kara Stanton. She gave Reese and Stanton orders to retrieve a stolen laptop containing secret software from Ordos, China. In reality, whether she knew it or not, the whole operation was nothing more than a ruse to confirm that an advance team had eliminated all the software engineers connected to the Machine and had successfully moved the Machine to another location ("Matsya Nyaya").
After Ingram died, Corwin quit her government job and moved to Green Bank, a small town in West Virginia without cell phone service or wireless internet ("Wolf and Cub").
Will Ingram, eager to know what his father was working on before he died, tracked Alicia down and questioned her. During their conversation, Will referred to his "uncle Harold". Alicia seemed unnerved at the mention of his name and abruptly ended the conversation. She told Will nothing about her knowledge of the Machine ("Wolf and Cub").
Sometime after meeting with Will, Alicia was contacted by Henry Peck, a member of the National Security Agency (NSA) who was being targeted by government assassins trying to silence him about the Machine. Peck called Alicia numerous times trying to get information about "the machine". Though they never met in person, Alicia spoke to Peck over the phone. Not answering any of his questions, she only mentioned "Sibilance" and told him to "run". Apparently by following Peck, Alicia managed to listen in on a conversation between Peck and Finch. She was visibly surprised when Finch told Peck that the machine existed and that he had built it ("No Good Deed").
Alicia succeeded in tracking down Harold and observed him leaving the Library. She broke into the building and discovered Finch's headquarters. Not realizing what exactly she was looking at, she seemed overwhelmed seeing the Irrelevant List ("Firewall").
Alicia finally found Finch in his car as he waited to pick up John Reese and Caroline Turing. Climbing into the car and confronting him at gunpoint, she stated that Nathan had been afraid and stressed about what they had built, and that he had been killed because of the Machine. As guilt weighed down on her, she claimed she could feel the Machine watching and listening to them at that moment. Alicia told Finch that she was tired of running.
Finch told her that she was not running from the machine, but she was running from people they had both trusted. Alicia said he was right, but that it was a good thing she found Finch first. Moments later, she was killed with a gunshot to the head by Root. She was later found dead at the meeting point by Reese ("Firewall").
Corwin's murder attracted attention from several directions. Reese asked Carter and Fusco to look into the case and find out why Corwin was in New York ("The Contingency"). Mark Snow also began an independent investigation under coercion from Kara Stanton. They all hit a dead end because Special Counsel and Denton Weeks, fearing that the investigation into Corwin's death might uncover their secret activities, conspired to corrupt it ("Bad Code"). Their fixer, Hersh, managed to clean up most of the evidence, except for a few objects Fusco removed from her apartment prior to his arrival. Hersh also removed a radio-frequency identification (RFID) chip from the right shoulder of Corwin's body that had been planted in her and could be the reason why she had been running and hiding ("Masquerade"). Special Counsel later received a report stating that the chip was linked to Decima Technologies, the group that was trying to take control of the Machine ("God Mode").
Denton Weeks
Denton L. Weeks (played by Cotter Smith) was the official who commissioned the development of the Machine. He worked for the Assistant to the President for National Security Affairs (APNSA) and was Alicia Corwin's supervisor. He was one of the few people to know of the Machine's existence and was in league with the Special Counsel.
At an unspecified point in time, Weeks was appointed head of a task force on Privacy and Information, reporting directly to the Chief of Staff. Prior to that promotion, he had already been working in corporate law for a decade as a member of the White House legal team and a specialist in security policy ("Bad Code").
Weeks and Corwin arrived unannounced at Nathan Ingram's office to question him about the Social Security number Ingram had given to Corwin in a previous meeting. Demanding an explanation of how a computer program could spot a traitor when federal agents could not, Weeks tried to learn more about how the Machine worked. He appeared slightly agitated when Ingram did not answer his questions, but had to put up with it when he was informed of the price negotiated for the project. He did not know that the Machine had already evaluated him as a possible threat. Finch later revealed to Ingram that Weeks had been unsuccessfully trying to hack into the Machine via NSA feeds for the past six months ("Super").
Following the murder of Alicia Corwin by Root, Weeks met with Special Counsel to discuss further steps. They had just started to remove all evidence when Weeks received a message from his mistress about an emergency. Not knowing that Root had set it up to lure him to his getaway lodge, he drove there and was knocked out by Root with a sedative ("The Contingency"). Root then tortured him using Palestinian hanging, an enhanced interrogation technique that Weeks himself had once authorized in a top-secret Department of Defense memo, trying to get him to talk about the whereabouts of the Machine.
When Root stepped away to gas up the car, Weeks recognized Finch and convinced him to help him escape. Weeks managed to free himself and attacked Root when she returned. He beat her up, then turned her gun on Finch, threatening to shoot unless Finch told him what he knew about how to access the Machine. Just as Root had planned, however, the gun did not fire; Root first tased Weeks and eventually killed him by shooting him in the chest ("Bad Code").
Special Counsel
Special Counsel (played by Jay O. Sanders) is a shadowy figure from the United States Office of Special Counsel and one of the original eight people who know about the Machine. He holds a position filled by Presidential appointment followed by Congressional confirmation. He appears to be the one engineering the activity regarding the Machine, and sees Reese as a threat.
He sent out a team of hit men to assassinate Henry Peck after he came too close to finding out about the Machine ("No Good Deed").
After Alicia Corwin was murdered, he conspired with Denton Weeks and Hersh to cover up the case and make sure no connections led back to them ("The Contingency"). When Weeks went missing, he sent Hersh to look for him, but Hersh only found him dead ("Bad Code").
He later sent Hersh to kill the "Man in the Suit" but redirected him to the FBI when it appeared the FBI had captured him. When Hersh asked about the four suspects, Special Counsel told him to kill them all ("Prisoner's Dilemma").
After Hersh's failed attempt to kill The Man in the Suit at Rikers Penitentiary, he once again instructs Hersh to find Reese and kill him. This time, Hersh loses in a fist fight with Reese and after recovering in the hospital, Special Counsel calls him in for more important matters. He then begins drafting a letter with his new secretary, who is revealed to be Root ("Booked Solid").
He meets with Shaw, who trades him her partner's evidence on the Program in exchange for calling off the hit on her. He agrees, and Shaw kills her treacherous handler before leaving. Later, Hersh poisons Shaw anyway ("Relevance").
In "God Mode", he arrives with Hersh and two other men to confront Finch about the Machine. After Reese, Finch, Root, and Shaw leave, Special Counsel receives a call from an unnamed woman who outranks him. Hersh takes the phone, and the woman orders him to seal the room. Hersh kills Special Counsel and leaves.
Hersh
Hersh (played by Boris McGiver) was Special Counsel's enforcer and fixer in the clean-up of Alicia Corwin's murder. He also worked for Control, the head of the ISA.
After Alicia Corwin was found dead and the investigation was handed over to the NYPD 8th Precinct, Hersh was sent there to make sure that "the investigation is a dead end". He stole Corwin's case file and corrupted digital records as well as the ballistics report ("The Contingency").
While still investigating the Corwin murder, Hersh is dispatched to investigate the disappearance of Denton Weeks. Suspicious of Hersh's presence and behavior, Fusco uses Finch's phone cloning program to listen in on Hersh's cell phone and follow him around. From information overheard on one of Hersh's calls, Fusco discovers a clue to where Root was holding Finch ("Bad Code").
To clean up loose ends, Hersh went to Corwin's hotel room and collected personal effects and later gained access to the cold storage room at the morgue where Corwin's body had been examined to remove an RFID chip from under her skin ("Masquerade").
Parallel to corrupting the Corwin investigation, Hersh continued Special Counsel's search for Denton Weeks, who had disappeared on private business and was later found dead after Root shot him ("Bad Code").
When Special Counsel learned of Reese's recent arrest, he instructed Hersh to kill Reese and the other three men arrested on suspicion of being "The Man in the Suit". Following the instruction, Hersh pulled out a gun in front of a group of policemen and fired into the air several times. He was arrested and taken to Rikers Penitentiary, where he managed to kill Brian Kelly, one of the four suspects. He attempted to kill Reese too, but Elias stopped him before he had the chance ("Prisoner's Dilemma").
Hersh is freed from Rikers and tracks Reese to a hotel where Reese is saving the latest person of interest. Hersh and Reese engage in a brief fight, which Reese wins.
Hersh is then seen receiving a call from the Special Counsel telling him there is a situation in Washington, D.C. The Special Counsel then calls in his newly hired assistant, Ms. May, who, unknown to Special Counsel, is the hacker Root ("Booked Solid").
His next appearance is in New York, poisoning Sameen Shaw. However, with the help of Leon Tao, the team is able to fake her death ("Relevance").
Hersh leads the effort to prevent Decima Technologies from taking control of the Machine. He later confronts Finch, Reese, Shaw and Root at gunpoint in the Hanford Nuclear Reservation where the Machine had once been stored. After they leave, Hersh is ordered by Control to kill everyone in the room, including Special Counsel. He is also revealed to have orchestrated the ferry bombing that killed Nathan Ingram and left Finch with his permanent injuries. ("God Mode")
Hersh is later sent by Control to kill Root in a psychiatric hospital. With the help of the Machine, Root escapes and wounds Hersh but spares his life on orders from the Machine. ("Lady Killer")
After being exposed, Control calls in Hersh in her attempts to get either the Machine or Arthur Claypool's machine Samaritan. With the help of Root, the team escapes, but Hersh captures Root who is taken for torture by Control. Hersh then leads a SWAT team to the bank where Vigilance is trying to retrieve the Samaritan drives. When Peter Collier refuses to cooperate, Hersh and his team storm the bank, killing several Vigilance members before one blows himself up with a grenade, leaving Hersh's fate unknown. ("Alethia")
Hersh survives and is later tracked down by Shaw hunting another relevant number. Shaw drugs Hersh for information about why the government is after Owen Matthews and leaves him alive. Before she departs, Hersh asks if Shaw's new employers are treating her well, showing concern for Shaw who simply tells him that they haven't tried to kill her and leaves. ("4C")
After Vigilance abducts Control and several other important people, Hersh arrives at the hotel where he enters a standoff with Reese and Shaw. After Root tells them Hersh knows where to find Finch, Reese and Shaw convince him to work with them to save their respective bosses. Hersh then leads them to Decima's hideout only to find it empty and that Peter Collier is running a kangaroo court where everyone is on trial for their connection to the Machine. ("A House Divided")
After a brief gunfight with Decima operatives, Hersh, Reese and Shaw set out together to find the courthouse. When Shaw needs to get to Root fast, Hersh helps her steal a bike and continues on to a destination specified by Root. There, they meet up with Detective Fusco who brings them Bear and news of a Vigilance operative looting nearby stores. Hersh and Reese are able to trick the operative into giving them the courthouse location before turning him over to Fusco. Near the courthouse, they find a Vigilance guard post slaughtered and Finch testifying on TV about the history of the Machine. Finally reaching the courthouse, Hersh and Reese separate, acknowledging they will likely be enemies the next time they meet. In the basement, Hersh finds a bomb set to go off when the power is restored and sets to work disarming it, refusing help from Reese and telling him to focus on rescuing Finch. Hersh is severely wounded in a gunfight with Decima operatives, but continues his efforts to disarm the bomb. Before he can cut a last wire, the power comes back on and the bomb detonates, killing Hersh, the people kidnapped for the trial and the emergency responders.
The next day, Control receives Hersh's final autopsy report. In a voiceover as the report is shown, Root states that a lot of people who could have helped them against Samaritan will die, indicating Hersh to be one of those people. ("Deus Ex Machina")
Control
Control (played by Camryn Manheim) is the alias of the head of the ISA's operation (code-named Northern Lights) regarding the Machine. She is a forty-year-old single mother with a ten-year-old daughter, Julia. Control is shrewd, ruthless and combat-minded.
Control is first mentioned in (2.22 "God Mode") when Special Counsel calls her from the Hanford Nuclear facility where the Machine was formerly stored. She speaks to Hersh and orders him to "seal" the room. He confirms her orders and executes everyone present, including Special Counsel. Later, Hersh is seen talking to an unseen woman in a black town car, presumed to be Control.
She first appears in (3.11 "Lethe") posing as the wife of Arthur Claypool, Samaritan's creator, who is suffering from memory loss due to a terminal brain tumor. After escaping Vigilance, Claypool remembers the death and burial of his wife, which exposes Control as an impostor.
(3.12 "Aletheia") Control brings her ISA team into the room to attempt to coerce access to the Machine or Samaritan out of either Finch or Claypool. She orders Hersh to execute Shaw, but Root crashes the party and is captured by Hersh instead while the others escape. Control recognizes Root as the analog interface for the Machine and brutally tortures her for its location, going so far as to deafen Root in her right ear. The Machine helps Root get the upper hand and escape.
(3.16 "Ram") In 2010, Control is angry to learn that CIA agents Reese and Stanton failed to retrieve Daniel Casey's laptop and that it has been traced to Ordos, China. She orders Special Counsel to have the CIA send Reese and Stanton to confirm its location and then be killed in a drone strike as a punishment.
(3.22 "A House Divided") Control is present at a meeting with other high level government officials to discuss the possible implementation of Samaritan. She becomes one of the hostages taken by Vigilance.
(3.23 "Deus Ex Machina") After Sen. Garrison makes her out to be the main conspirator, Control is forced to give testimony in Vigilance's mock trial. She refuses to cooperate, saying that as a government agent she won't confirm or deny anything that is asked of her. Collier prepares to shoot her, but she is saved by Finch who is willing to talk. Eventually, Reese and Hersh assault the mock trial and the hostages are rescued by Decima agents. Control and Garrison are evacuated. She is present when Garrison allows Samaritan to go online.
(4.12 "Control-Alt-Delete") Control is shown to now be the head of Samaritan's Research into relevant threats. She tracks a terrorist cell in Detroit, but when one suspect, Yasin Said, escapes, she is denied access to all the evidence by a liaison Samaritan operative. She attempts to go above his head by getting Sen. Garrison and Greer involved, but is told to back off. She secretly contacts Devon Grice to find and interrogate Said. He gives her information that Said will attempt to flee to Canada on a freight train. Control and her bodyguards find Said, but he escapes when Control is captured by Reese and Root. They interrogate her in a warehouse about Shaw's location and the battle at the Stock Exchange, but it quickly becomes apparent that she has no knowledge of the events and is merely a puppet for Samaritan. She is rescued by Grice and the ISA. She travels alone to Canada and finds Said. He proclaims his innocence, and it seems likely that he was set up by Samaritan, but she executes him anyway. Later, she travels to the supposed scene of the battle at the Stock Exchange. Everything seems normal, but Control notices wet paint on a wall.
(4.21 "Asylum") In Washington, D.C., Control captures and interrogates a female Samaritan handler. In the handler's schedule book, Control finds mention of an event referred to as the "Correction." The handler refuses to give up any information, so Control executes her.
(4.22 "YHWH") Control, with Grice's help, seems to uncover that the "Correction" is a terrorist plot against the Supreme Court. Since the attack was not presented as relevant through Research, Control suspects it is a Samaritan-sanctioned operation approved by John Greer. She sends Grice to the courthouse to investigate and confronts Greer at gunpoint, certain that she has uncovered his plot. However, the "Correction" is revealed to be a purge of persons seen as threats to Samaritan. Grice is executed for his previous betrayal, and Control is black-bagged by Greer's operatives and taken away to an unknown fate.
Control considers herself the ultimate patriot. In ("Control-Alt-Delete"), she proudly exclaims that up until that point she has personally sanctioned the executions of 854 individuals whom she evaluated as detrimental to national security.
Senator Ross Garrison
Senator Ross Garrison (played by John Doman) is a U.S. Senator and one of the original eight people who was aware of the Machine's existence.
He first appears in (3.19 "Most Likely To...") where he speaks to Control about his worries of potential legal ramifications if the Machine's existence was ever made public. After Operation Northern Lights is exposed by Vigilance, he denies any knowledge during a press conference. Later, he orders Control to shut down the program.
(3.20 "Death Benefit") Garrison meets with John Greer and is convinced to give Samaritan a 24-hour test period in the five boroughs of New York City.
(3.21 "Beta") Garrison demands and later receives a relevant number from Greer, but he remains unaware that the main goal of the Samaritan beta is to track down Harold Finch.
(3.22 "A House Divided") While in a meeting to get approval for Samaritan, Garrison is one of the hostages captured by Vigilance and taken to their kangaroo court proceedings.
(3.23 "Deus Ex Machina") Garrison gives a brief testimony in the "trial", deflecting most of the blame to Control. Through the intervention of Reese and Hersh, Garrison is eventually rescued by Decima agents and he is evacuated with Control. After a bomb in the post office goes off and Vigilance is falsely implicated, Garrison readily agrees to Greer's demands for complete control and he allows Samaritan to go online.
Garrison makes periodic appearances in Season 4 to check on the status of Samaritan. (4.12 "Control-Alt-Delete") He chastises Control for claiming that Samaritan representatives are purposely trying to derail her investigation into a terrorist cell.
(5.12 ".exe") In a simulated world where the Machine never existed, Samaritan goes online as the government's surveillance program anyway. Garrison meets with John Greer to discuss his dissatisfaction with Decima's practices. After Garrison leaves, Greer orders his trusted assassin (Root) to kill him.
(5.13 "return 0") At a secret meeting following the destruction of Samaritan, Garrison attempts to blame the ICE-9 virus attack on the Chinese. However, a government official calls his bluff and blames Operation Northern Lights. Garrison absolves himself of complicity by claiming that Operation Northern Lights was shut down years before.
Devon Grice
Devon Grice (played by Nick Tarabay) is a Crimson 6 agent who was trained by Sameen Shaw herself when she was still working as an ISA operative. When they meet in New York whilst both on different ends of the same mission, Grice lets her live. When a Samaritan representative refuses to allow Control to review the contents of a relevant threat's laptop, she secretly enlists Grice's help to continue the investigation. When Control discovers the impending "Correction" and labels it as a Samaritan-sanctioned terrorist attack, she has Grice investigate the supposed target: the Supreme Court. When the "Correction" is revealed to be a purging of persons seen as threats to Samaritan, Grice is executed by an operative for his previous betrayal.
Brooks
Brooks (played by Theodora Miranne) is another Crimson 6 agent and Grice's partner.
Cole
Cole (played by Ebon Moss-Bachrach) is Shaw's partner at the ISA. (2.16 "Relevance") Cole uncovers evidence that he and Shaw have been assassinating innocent people. He and Shaw are set up by their handler, and Cole is killed for learning too much about "The Program". Later, Shaw is shown to be watching over Cole's parents.
(5.12 ".exe") In a simulated world where the Machine never existed, Cole is shown to have survived and to still be partnered with Shaw.
Henry Peck
Henry Peck (played by Jacob Pitts) is a former NSA Analyst and Person of Interest. (1.22 "No Good Deed") When Peck discovers the existence of the mass surveillance program, "Operation Northern Lights", an ISA team is sent to kill him. Peck still refuses to stop investigating, so Finch is forced to tell him the truth and give him a new identity so he can escape the threat. Alicia Corwin secretly records their conversation and learns of Finch's involvement.
(5.12 ".exe") In a simulated world where the Machine never existed, Peck uncovers the existence of Samaritan. He presents his evidence to a member of the Office of Special Counsel, revealed to be Shaw. After she verifies that Peck hasn't told anyone else about his theory, she assassinates him.
The CIA
The following characters are part of Reese's back story relating to his time with the CIA.
Mark Snow
Mark Snow (played by Michael Kelly) is the alias of a CIA operative partnered with Tyrell Evans. He once worked with John Reese and is now trying to find and kill him, despite having once claimed to be Reese's best friend ("Number Crunch").
In 2008, Snow was with CIA agents John Reese and Kara Stanton, operating in New York City. They had a government employee in their custody who had committed treason by attempting to sell some software to the Chinese. Positive they were not going to get a call regarding their prisoner, he allowed Reese to head out to the city for some rest ("Blue Code").
In 2010, Snow was in Morocco while Reese and Stanton were interrogating a suspect. Alongside Alicia Corwin, he gave Reese and Stanton orders to go to Ordos to retrieve a laptop containing information pertaining to a computer virus that could disable nuclear programs. Snow had been ordered to secretly tell each agent that the other was compromised and to have them retire each other. The plan failed, however: after Stanton shot Reese and told him it was because he was compromised, Reese told her that he had been ordered to do the same to her. They realized they had been set up and escaped just minutes before a gunship shelled their supposed extraction point ("Matsya Nyaya").
When Reese's fingerprints were run through the system after his arrest, the CIA was alerted to the fact that Reese had survived the shelling in Ordos. Snow visited the precinct where Detective Carter worked and asked her a few questions about Reese. Both Snow and his partner began to follow her, but she eventually caught them. Snow then had a talk with Carter at a diner, in which he portrayed Reese as a cold-blooded killer. He convinced her to set a trap to capture Reese, though his true intention was, in fact, to have him killed. The trap failed, however, as his partner Evans went for non-lethal shots, allowing Reese to escape with Carter's help to Finch's car ("Number Crunch").
In 2012, Snow, along with his CIA partner, initiated surveillance on Carter, believing she helped Reese escape. He constantly had operatives tailing her, though they were unable to track her effectively, and she managed to lose them. Later, he received information that Reese had escaped to northern Connecticut where his fingerprints were found on a prescription bottle at a veterinary clinic (planted by Lionel Fusco at Finch's request), and immediately left New York ("Super").
Snow eventually returned to New York and confronted Carter, whom he still suspected after realizing Reese had never been in Connecticut. Later, Snow had to bail out a CIA operative known as L.O.S., who was selling drugs for the CIA in a scheme to fund the War on Terror. Bluntly telling L.O.S. that he had warned him about getting caught "behind enemy lines", he had his head bagged and had him presumably killed in the back seat of an SUV ("Blue Code").
When the FBI established a task force to catch Reese, Special Agent Donnelly informed Carter that he did not just want to apprehend Reese, but also expose Snow and the CIA's illegal operations. He claimed that Snow was the man who swept the CIA's domestic operations under the rug ("Identity Crisis").
Unhappy with the FBI's investigation into Reese, Snow paid another visit to Carter with Evans and warned her not to cooperate with Agent Donnelly. The next day, he received some information supposedly regarding Reese from a North Korean contact who had aided a CIA agent who escaped from China. Both he and Evans entered a hotel room, but were ambushed by Stanton, who had also survived the Ordos shelling. Evans was killed, while Snow was wounded as Stanton approached him, wanting to "catch-up" ("Matsya Nyaya").
Following the ambush, Snow was held captive by Stanton, who locked him up in a storage room and forced him to wear a bomb vest to prevent him from escaping. From time to time, he would be released to "run errands" for Stanton, presumably to help her find out who had set her up in the Ordos mission. While at the morgue looking into the Alicia Corwin case, Snow accidentally ran into Carter. He told her that he had been reassigned and was no longer trying to catch Reese. When Carter tried to call Snow's CIA phone, she was redirected to another man who appeared to be looking for Snow too ("Masquerade").
Snow killed a janitor named Dusan Babic on Stanton's orders so that he could use his ID to get into Fujima Techtronics. Having no other way of communicating with Carter, Snow left her card in the dead man's pocket with the Fujima Techtronics address on it. Carter followed Snow's clues to Fujima Techtronics and saw him leaving the building. She followed and confronted him. Snow revealed that he was rigged with an explosive vest and told Carter to warn John that Stanton was "planning something big" ("Critical").
Snow is present when Reese wakes up in the back of a moving bus after being kidnapped and drugged at the end of "Prisoner's Dilemma" by Stanton. He informs Reese that they are wearing similar bomb vests and that Stanton holds the trigger to both. Kara then sends them to perform a few tasks for her.
After retrieving the drive that Snow had earlier stolen from Fujima Techtronics, Stanton sends them to steal the car of two Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) agents. Snow nearly kills one of the agents but is stopped by Reese. Snow and Reese then take the place of the two agents and enter an office building housing a secret DOD facility on the 21st floor. While they are going up the elevator, Stanton informs them that they will be facing two Delta Force operatives with M4s. The doors open, and Reese and Snow take out the two men and tie them up. One of them hits Snow, who prepares to kill the man, but Reese stops him again. Kara warns that there may be signal interference, so she arms the bombs on a 15-minute timer.
Reese and Snow then enter an electronics lab and, with the help of Kevin, a technician, prepare to download a computer virus. Reese tells Snow that Stanton cannot hear them in the lab and suggests that they do something, but Snow says they do not have enough time. On Kevin's advice, Reese starts removing the drives to trigger a security breach; Snow discovers what Reese is doing and attacks him. The two engage in a vicious fist fight, but Reese holds Snow off. Just then Stanton arrives, tells them that she only needed them to clear the way, and uploads a virus onto the DOD servers. She then triggers a five-minute timer on the bomb vests, locks them in the lab and leaves.
With Kevin's help, Reese and Snow manage to open the lab door and escape. Reese says that they have to get up to the roof so no one else is killed, but Snow clubs him down, saying that the CIA has a safe house two blocks away. Reese warns him that the CIA will consider him compromised and that all that awaits him is a black hood. Reese tries to get through to him, but Snow simply wishes him luck and leaves. As Finch arrives and disarms Reese's bomb vest just in the nick of time, Kara returns to her car, where, to her surprise, she finds Snow in the back seat. Snow tells Stanton that she was right about him: he would be very good at dying. The car then explodes, killing Snow and seemingly Kara as well. Up on the roof, Reese realizes what Snow has done.
After Snow's death, the FBI informs Carter that they believe Snow to be "The Man in the Suit" that Donnelly was chasing and consider the case now closed ("Dead Reckoning").
The Machine marks Snow with a white square, indicating that he does not know of its existence ("Super"). Strangely, the Machine never lists Snow as a threat to an asset (Reese), even though he tries many times to kill him.
Tyrell Evans
Tyrell Evans (played by Darien Sills-Evans) is the alias of a CIA operative partnered with Mark Snow, whom he assisted in trying to retire Reese, along with other assignments. He was shown to be an expert sniper and good with computers, and appeared many times alongside Snow ("Number Crunch").
While Snow distracts Reese, Evans shoots Reese in the torso, almost killing him, but fails to find him afterwards because Reese shoots out the lights right after ("Super").
He and Snow receive a tip about a possible location for Reese at a hotel in the city. The two enter the room with guns drawn but are unable to cover it properly, and are shot by an assailant standing behind the door before they can return fire. The assailant turns out to be not Reese but Stanton ("Matsya Nyaya").
Kara Stanton
Kara Stanton (played by Annie Parisse) is a former CIA operative and John Reese's former handler and partner. Before joining the CIA, Kara graduated from the U.S. Naval Academy and served as an officer in the U.S. Marine Corps. Unlike John, Kara loved operating as an assassin; she was also John Reese's love interest. She was ruthless, and her boss Mark Snow described her as a disturbing person, "in a class all by herself".
In 2006, Stanton met with Reese for the first time in Hungary to interrogate two men about the whereabouts of Alim Nazir. Prior to this meeting, Stanton was apparently informed by a reliable, anonymous source that the men were involved in getting Nazir out of the country. On this information, Stanton executed them in front of a shocked Reese, who expected them to be questioned before any action was taken. She also had a photo of Jessica Arndt talking to Reese at the airport. Upon showing this to Reese, she told him that he could not go back to her, and that he no longer had any old friends. Telling him to dispose of the body and gun, Stanton assigned him the cover name "Reese" ("Foe").
In 2007, in Prague, Reese and Stanton posed as a couple and shot three men down (one man selling plans on a drone to two Chinese nationals). Reese wanted to finish the mission as fast as possible but Stanton told him that he should learn to love his work as a killer ("Prisoner's Dilemma").
In 2008, Stanton, Reese, and Mark Snow were operating illegally in New York City, holding a government employee captive after he tried to sell some software to the Chinese. When Reese was given permission for time off, Stanton followed him to a bar and found him meeting with Jessica's husband Peter. She gave him a lecture, telling him that they're no longer like other people, and Reese reluctantly left with her before Jessica spotted them ("Blue Code").
In 2009, Stanton and Reese were assigned to kill a couple in Paris. They followed the couple to a bar. Stanton did not care why they had been ordered to kill them and advised Reese to act more convincingly, since they were posing as a married couple. When the bar's other patron left, Stanton shot out the camera while Reese approached the couple.
Back in their apartment, after killing the couple, they tried to remove any trace that they had ever been there. Stanton told Reese that they could take a break, but he said he was fine. Stanton then drew a gun on him and demanded that he choose between being a boy scout and a killer, because she was tired of working with both. She also reminded him that he had chosen this life. Reese slammed her into the wall, told her that he loved his work, and they kissed ("Prisoner's Dilemma").
In 2010, Reese and Stanton were in Morocco interrogating a suspect. A few hours later Mark Snow and Alicia Corwin arrived and informed them that they were being sent to China to retrieve a high-profile Stuxnet-like computer program from the Chinese. As Stanton left the room, Snow secretly ordered Reese to retire her, claiming she had been in contact with a terrorist.
The pair arrived in Ordos and located the site where the program had been found. On arrival, they discovered the corpses of many software engineers, and much of the building's servers had been emptied. Reese found a survivor, with whom Stanton conversed in Chinese. He said that men had turned up and taken away the Machine. Upon hearing that, she promptly executed him and withheld what he had said from Reese. With much of what they came for already taken, Reese and Stanton had no choice but to wait for their extraction at nightfall.
When nightfall came, Stanton marked the landing zone with infrared glow sticks. Reese readied his gun to shoot her in the back but lowered it at the last moment, only for Stanton to turn and shoot him. She apologized, saying she had orders from Snow and had been told that Reese had ties to terrorists. Reese laughed, telling her that he had the same orders and that they were being set up, the beacon actually signaling for an air strike, not extraction. Reese then made his escape, leaving Stanton standing in shock. Overhead, a CIA drone launched a missile at the building. The wounded Reese managed to escape the blast radius and turned back to see an explosion that apparently killed his partner.
Stanton, however, had survived the explosion and escaped China with the help of a dissident group ("Matsya Nyaya").
While recovering, angry and disillusioned with her time as a CIA field operative, she was approached, brainwashed, and manipulated by a mysterious intelligence operative using the alias "John Greer", who had been aware of her and tracking her during her CIA field operations ("Dead Reckoning").
Snow revealed to Detective Carter that over the course of their partnership, Reese and Stanton worked on numerous missions and often saved each other's lives. However, he lied, telling Carter that Stanton was killed by Reese before he went off the grid ("Number Crunch").
Snow and his partner Evans were led to a hotel room after receiving some intelligence from one of their North Korean contacts. Upon their entry and subsequent search of the room, Stanton emerged from the shadows and ambushed them both, killing Evans and injuring Snow ("Matsya Nyaya").
Meaning to "catch up" with Snow, she kept him locked in a storage room with a bomb vest strapped onto him so that he would not escape. After he presented her proof that Alicia Corwin, the person who set her up in the mission to Ordos, was dead, she told him that she needed him to run a few more errands for her ("Masquerade").
Carter saw Snow leaving Fujima Techtronics, followed him, and confronted him. Snow revealed that he was rigged with an explosive vest and told Carter to tell John that "she is planning something big". Snow fled when a shooter, presumably Stanton, fired upon them and interrupted their meeting ("Critical").
When Nicholas Donnelly's number came up, Finch called to warn him. At that point Stanton rammed the vehicle Donnelly was using to transport Reese and Carter to a safe house. As the trio recovered from the collision, Stanton shot Donnelly twice before approaching Reese and asking if he had missed her. She then plunged a syringe into his neck, presumably containing some form of sedative, as Reese went limp after being injected ("Prisoner's Dilemma").
After rigging Reese with an explosive vest as well, she sends him and Snow out to run more errands for her. While they fetch a hard drive, Stanton remotely instructs them to kill the sellers when they demand a higher price, but Reese refuses. Before the seller can press the matter further, Stanton, who has been observing them, shoots the sellers with a sniper rifle from a rooftop.
She then orders Reese and Snow to steal the gear of two ATF agents who are about to be called in on a bomb threat Stanton has orchestrated in a nearby office building. The building houses a computer security installation and cyberwarfare development lab. By activating a fifteen-minute detonation timer on the vests, Stanton forces the two men to clear the way for her to access a Sensitive Compartmented Information Facility containing the weapons cache, where she uploads the contents of the drive Reese and Snow retrieved earlier. Before she leaves, she triggers the bomb vests and locks the two in the room.
On the way back to her car, she calls Greer to inform him that the mission has been completed. He tells her the name of the man who sold the laptop (alleged to be Harold Finch), whose sale caused what happened in Ordos. Before she can act on the information, she is cornered by Snow, who has escaped the building and hidden in the backseat of her car, and who gets his revenge by seemingly killing her in his bomb vest's explosion. Her fate is left uncertain, as she was never actually seen in the car when it exploded and her corpse was never recovered ("Dead Reckoning").
Decima Technologies
The following characters are involved in the Decima Technologies storyline, a shadowy organization that is in possession of the Samaritan AI:
John Greer
John Greer (played by John Nolan) is a former British Army officer and MI6 agent who seeks to destroy the Machine with a rival A.I. In 2010, he sent one of three interested parties to Ordos, China to retrieve a laptop with stolen "Machine" code; the other teams were the CIA (John Reese and Kara Stanton) and the Chinese ("RAM" (2014), "Aletheia" (2014), "Zero Day" (2013), "Trojan Horse" (2013), and "Dead Reckoning" (2013)). He serves as the main villain from the second half of season 3 through the end of season 5, bringing Samaritan, a totalitarian artificial intelligence, online to track down and kill any opponent, especially Finch, Reese, their friends, and the Machine. In the penultimate episode, ".exe", Greer confronts Finch as he tries to destroy Samaritan with the Ice-9 computer virus. Greer sacrifices himself in an attempt to kill Finch by locking them both in a sealed room and having Samaritan remove the oxygen. Greer dies from lack of oxygen, but Finch is rescued by the Machine with the help of Reese and Shaw, who had been given Greer's number by the Machine.
Jeremy Lambert
Jeremy Lambert (played by Julian Ovenden) is an operative for Decima Technologies and Greer's right-hand man. He first appears in (3.16 "RAM") posing as a government agent who wants to help Daniel Casey. Casey sees through his deception, forcing Lambert to attempt harsher methods to get at Casey's laptop. He is ultimately unsuccessful when the laptop falls into the hands of a Chinese intelligence agent.
(3.23 "Deus Ex Machina") Lambert and Decima rescue the hostages from Vigilance's mock trial. After Greer reveals the truth behind Vigilance, Lambert shoots and kills Peter Collier.
(4.10 "The Cold War") Lambert meets Root in a shadow zone and brokers a meeting between her and the analog interface of Samaritan, Gabriel Hayward.
(5.4 "6,741") Lambert is killed by Shaw in her escape from Samaritan, however it is later revealed that this was just one of thousands of simulations in Shaw's mind to get her to lead Samaritan to the Machine.
(5.7 "QSO") Lambert takes Shaw on a field trip to see a scientist whose research is threatening Samaritan's long term goals. Shaw kills the scientist, thinking she is in another simulation. However, it later becomes evident that maybe Shaw has lost her grip on reality and in fact really did kill an innocent person.
(5.8 "Reassortment") Shaw is finally able to escape her captors and finds herself within a South African prison. Right before getting to freedom, she is stopped by Lambert who tries to convince her that she is still in a simulation. Shaw mortally wounds him with a gunshot to the chest, telling him that he will be fine if this is only a simulation. As Lambert dies of his wound, Shaw takes his keys and drives his vehicle away from the facility.
Martine Rousseau
Martine Rousseau (played by Cara Buono) is the alias of a former investigator for the United Nations who is now a Samaritan operative.
Martine first appears in (4.1 "Panopticon") where she murders a journalist in Budapest who is attempting to reveal Samaritan to the public. Afterwards she serves as John Greer's lead assassin, hellbent on destroying the POI team.
(4.8 "Point of Origin") Martine is tasked by Greer to discover the identity of a woman (Sameen Shaw), who ISA operative Devon Grice allowed to escape a crime scene. She is eventually able to find Shaw's criminal friend Romeo, who she tortures for information. Romeo gives up Shaw and Martine tracks her to the department store that she works at for her cover identity.
(4.9 "The Devil You Know") When Shaw's cover is blown, Martine goes after her in a gunfight in a department store. Shaw is able to escape with the help of Root.
(4.11 "If-Then-Else") Martine leads a Samaritan task force to take out the POI team in the sub-levels of the Stock Exchange. Ultimately, Shaw intervenes and sacrifices herself to help the others escape and is last seen being shot multiple times at close range by Martine.
(4.19 "Search and Destroy") Martine intercepts the POI team when they discover a secret facility that Samaritan is using to apply Sulaiman Khan's anti-virus software to a worldwide search for the Machine. Root and Martine engage each other in hand-to-hand combat, but the POI team is overwhelmed and forced to retreat. Martine captures Khan and brings him to Greer.
(4.21 "Asylum") Root and Finch are captured while infiltrating a mental asylum that serves as Samaritan's New York base of operations. Martine confines Root to a bed, intent on brutally torturing her. While the Machine and Samaritan discuss the surrender of the Machine in exchange for the release of the POI agents, Root is able to get an advantage over Martine and snap her neck.
Gabriel Hayward
Gabriel Hayward (played by Oakes Fegley) is a young boy who acts as Samaritan's "analog interface".
Claire Mahoney
Claire Mahoney (played by Quinn Shephard) is a former college student who stole files from a private military contractor and who now has become obsessed with solving the Nautilus puzzle. Finch is able to discover that the Nautilus is actually a Samaritan recruiting tool, but he can't convince Claire to stop her quest. She makes it to the end of the puzzle where she is surrounded by hitmen hired by the PMC, but she is saved by an unseen marksman. She finds a hidden cell phone and Samaritan tells her that it will now protect her.
(4.15 "Q&A") Claire secretly contacts Finch, telling him that he was right about Samaritan. Finch rescues her from a sniper who wounds her and she attempts to give him a piece of Samaritan source code to study on his computer, but he remains suspicious. Eventually, she calls Harold by his name (which he never told her) and she is outed as an active Samaritan operative. She takes Harold at gunpoint to a Samaritan-controlled school and attempts to recruit him. He refuses and is taken away by other Samaritan operatives, but Root saves him and wounds Claire. Later, Claire takes Finch's laptop to Greer. When stating that she was nearly killed twice, she becomes concerned when Greer implies that she is expendable.
Zachary
Zachary (played by Robert Manning, Jr.) is an operative for Decima Technologies who later becomes a Samaritan agent. He is killed by John Reese in ".exe".
Jeff Blackwell
Jeff Blackwell (played by Joshua Close) is a recently paroled ex-con who later works for Samaritan.
(5.2 "SNAFU") Blackwell is originally a Person of Interest who is written off by Reese as a non-threat due to the Machine suffering malfunctions from its reboot. Later, he is recruited into Samaritan by Mona.
(5.5 "ShotSeeker") Blackwell is put back onto the POI team's radar when he is involved in the theft of research compiled by a missing graduate student at the behest of Samaritan.
(5.8 "Reassortment") Blackwell is tasked by Mona to infect two doctors in a quarantined hospital with a Samaritan-created virus, but the POI team stops him from completing his mission.
(5.10 "The Day The World Went Away") Blackwell is given the order by a superior to take out two high-valued targets (Finch and Root) from a sniper perch. He misses Finch, but mortally wounds Root, which eventually leads to her death.
(5.13 "return 0") Blackwell is part of a Samaritan team that assaults the Subway. He is captured by Fusco and Shaw, who with the help of the Machine deduce that he killed Root. He is able to stab Fusco with a hidden knife and escape. One week after the destruction of Samaritan, Shaw tracks him down and executes him as retribution for Root.
Mona
Mona (played by LaChanze) is a Samaritan operative who recruits Jeff Blackwell and later serves as his handler. She orders Blackwell to infiltrate a quarantined hospital and murder two doctors with a virulent flu strain created by Samaritan. Blackwell can't complete his mission, but Mona tells him that his actions ultimately helped achieve the desired outcome: Following the outbreak, numerous civilians willingly went to clinics for vaccinations allowing Samaritan to secretly collect their DNA.
Travers
Travers (played by Michael Potts) is a Samaritan agent who serves as a liaison with the government. (4.12 "Control-Alt-Delete") Travers is present in Research when Control demands to be privy to vital information on a suspected terrorist's laptop. He refuses and eventually turns off the Samaritan feeds when Control persists. He turns the feeds back on after Sen. Garrison gets Control to back down. (5.12 ".exe") Travers captures Finch in a Fort Meade server room, as Finch attempts to upload the ICE-9 computer virus. He then brings Finch to Greer.
Vigilance
The following characters are involved in the Vigilance story line, in which a violent organization professes to protect people's privacy from government intrusion.
Peter Collier
Peter Collier (played by Leslie Odom, Jr.) is the alias of Peter Brandt, the presumed leader of Vigilance. In 2010, Peter Brandt was an aspiring lawyer whose former addict brother Jesse was arrested by the federal government and held without legal counsel. Jesse eventually committed suicide while in custody, but Peter learned afterwards that Jesse was innocent. Infuriated, Peter got no answers or explanations from the government. Soon after, he was contacted by an anonymous source who played to his anger and ultimately recruited him into Vigilance where he took the alias of Peter Collier. Vigilance started off small by vandalizing surveillance equipment and spreading propaganda. After discovering an undercover federal agent in his ranks and summarily executing him, Collier was goaded by the anonymous source into taking more drastic measures.
Collier first appears posing as a member of a firm named Riverton, who is trying to get involved with Person of Interest Wayne Kruger's Lifetrace company. Kruger's own life unravels after his privacy and legal history are compromised by members involved in a class-action lawsuit against his company. Kruger unwittingly attempts to meet with Riverton to save the deal, unaware that Collier orchestrated his demise. Collier wounds Reese and executes Kruger before disappearing.
Collier and Vigilance attempt to abduct Arthur Claypool, the creator of rival ASI "Samaritan", but are foiled by Finch and Shaw. Vigilance eventually tracks Finch, Claypool and Shaw to a bank where the two drives containing the Samaritan source code are being held. Collier attempts to bomb his way into the vault to capture Claypool and Finch and destroy the drives, but is unsuccessful when Reese and Fusco show up to help.
After Vigilance kills OPR official Leona Wainwright, Finch and Fusco travel to her Washington, D.C. office to figure out why she was targeted. Finch breaks into her vault and discovers a government black ops budget report that mentions Operation Northern Lights. Collier and Vigilance show up to get the report and capture Finch, but he is rescued by Root and Fusco. Collier escapes with the report and disseminates it to the press, which causes Sen. Garrison to order Control to shut down Operation Northern Lights.
Collier and Vigilance use a computer virus to disable the power grid in New York City, so that they can capture Sen. Garrison, Control, National Security Adviser Manuel Rivera, John Greer and Finch. They take their hostages to an abandoned post office where they have set up a kangaroo court to try the hostages for their crimes against privacy. When Rivera gets hostile during his testimony, he is shot and killed by Collier. After Garrison implicates Control as the main conspirator, Collier prepares to shoot her, but Finch is able to stop him and admits his involvement in creating the Machine. Reese and Hersh team up to save their bosses and assault the post office, forcing Vigilance to take the hostages to a nearby rooftop for immediate execution. Decima agents led by Jeremy Lambert attack and kill everyone except Garrison, Control, Greer, Finch and Collier. After Garrison and Control are evacuated to safety, Greer tells Collier the brutal truth: Decima created Vigilance and subliminally radicalized them. When the power in the city is restored, a massive bomb in the post office explodes, killing Hersh, many Vigilance members and numerous civilians. The bombing is blamed on Vigilance, painting them as a domestic terrorist organization. Garrison falls for the ploy and uses the attack to quickly adopt Samaritan as the government's new mass surveillance program. Jeremy Lambert then executes Collier, but Finch escapes with the help of Reese.
Madison
Madison (played by Diane Davis) is the alias of a member of Vigilance, possibly second in command to Collier. She acts as the judge during Vigilance's "trial". Though she initially escapes the post office bombing, she is labelled a threat when Samaritan comes online and is found and killed by its operatives.
References
Sources
Person of Interest characters
Characters

Jargon File

The Jargon File is a glossary and usage dictionary of slang used by computer programmers. The original Jargon File was a collection of terms from technical cultures such as the MIT AI Lab, the Stanford AI Lab (SAIL) and others of the old ARPANET AI/LISP/PDP-10 communities, including Bolt, Beranek and Newman, Carnegie Mellon University, and Worcester Polytechnic Institute. It was published in paperback form in 1983 as The Hacker's Dictionary (edited by Guy Steele), revised in 1991 as The New Hacker's Dictionary (ed. Eric S. Raymond; third edition published 1996).
The concept of the file began with the Tech Model Railroad Club (TMRC) that came out of early TX-0 and PDP-1 hackers in the 1950s, where the term hacker emerged, along with the ethic, philosophies and some of the nomenclature.
1975 to 1983
The Jargon File (referred to here as "Jargon-1" or "the File") was made by Raphael Finkel at Stanford in 1975. From that time until the plug was finally pulled on the SAIL computer in 1991, the File was named "AIWORD.RF[UP,DOC]" ("[UP,DOC]" was a system directory for "User Program DOCumentation" on the WAITS operating system). Some terms, such as frob, foo and mung are believed to date back to the early 1950s from the Tech Model Railroad Club at MIT and documented in the 1959 Dictionary of the TMRC Language compiled by Peter Samson. The revisions of Jargon-1 were all unnumbered and may be collectively considered "version 1". Note that it was always called "AIWORD" or "the Jargon file", never "the File"; the latter term was coined by Eric Raymond.
In 1976, Mark Crispin, having seen an announcement about the File on the SAIL computer, FTPed a copy of the File to the MIT AI Lab. He noticed that it was hardly restricted to "AI words" and so stored the file on his directory, named as "AI:MRC;SAIL JARGON" ("AI" lab computer, directory "MRC", file "SAIL JARGON").
Raphael Finkel dropped out of active participation shortly thereafter and Don Woods became the SAIL contact for the File (which was subsequently kept in duplicate at SAIL and MIT, with periodic resynchronizations).
The File expanded by fits and starts until 1983. Richard Stallman was prominent among the contributors, adding many MIT and ITS-related coinages. The Incompatible Timesharing System (ITS) was named to distinguish it from another early MIT computer operating system, Compatible Time-Sharing System (CTSS).
In 1981, a hacker named Charles Spurgeon got a large chunk of the File published in Stewart Brand's CoEvolution Quarterly (issue 29, pages 26–35) with illustrations by Phil Wadler and Guy Steele (including a couple of Steele's Crunchly cartoons). This appears to have been the File's first paper publication.
A late version of Jargon-1, expanded with commentary for the mass market, was edited by Guy Steele into a book published in 1983 as The Hacker's Dictionary (Harper & Row CN 1082, ). It included all of Steele's Crunchly cartoons. The other Jargon-1 editors (Raphael Finkel, Don Woods, and Mark Crispin) contributed to this revision, as did Stallman and Geoff Goodfellow. This book (now out of print) is hereafter referred to as "Steele-1983" and those six as the Steele-1983 coauthors.
1983 to 1990
Shortly after the publication of Steele-1983, the File effectively stopped growing and changing. Originally, this was due to a desire to freeze the file temporarily to ease the production of Steele-1983, but external conditions caused the "temporary" freeze to become permanent.
The AI Lab culture had been hit hard in the late 1970s by funding cuts and the resulting administrative decision to use vendor-supported hardware and associated proprietary software instead of homebrew whenever possible. At MIT, most AI work had turned to dedicated Lisp machines. At the same time, the commercialization of AI technology lured some of the AI Lab's best and brightest away to startups along the Route 128 strip in Massachusetts and out west in Silicon Valley. The startups built Lisp machines for MIT; the central MIT-AI computer became a TWENEX system rather than a host for the AI hackers' beloved ITS.
The Stanford AI Lab had effectively ceased to exist by 1980, although the SAIL computer continued as a computer science department resource until 1991. Stanford became a major TWENEX site, at one point operating more than a dozen TOPS-20 systems, but by the mid-1980s, most of the interesting software work was being done on the emerging BSD Unix standard.
In May 1983, the PDP-10-centered cultures that had nourished the File were dealt a death-blow by the cancellation of the Jupiter project at DEC. The File's compilers, already dispersed, moved on to other things. Steele-1983 was partly a monument to what its authors thought was a dying tradition; no one involved realized at the time just how wide its influence was to be.
1990 and later
A new revision was begun in 1990, which contained nearly the entire text of a late version of Jargon-1 (a few obsolete PDP-10-related entries were dropped after consultation with the editors of Steele-1983). It merged in about 80% of the Steele-1983 text, omitting some framing material and a very few entries introduced in Steele-1983 that are now only of historical interest.
The new version cast a wider net than the old Jargon File; its aim was to cover not just AI or PDP-10 hacker culture but all of the technical computing cultures in which the true hacker-nature is manifested. More than half of the entries now derived from Usenet and represent jargon then current in the C and Unix communities, but special efforts were made to collect jargon from other cultures including IBM PC programmers, Amiga fans, Mac enthusiasts, and even the IBM mainframe world.
Eric Raymond maintained the new File with assistance from Guy Steele, and is the credited editor of the print version of it, The New Hacker's Dictionary (published by MIT Press in 1991); hereafter Raymond-1991. Some of the changes made under his watch were controversial; early critics accused Raymond of unfairly changing the file's focus to the Unix hacker culture instead of the older hacker cultures where the Jargon File originated. Raymond has responded by saying that the nature of hacking had changed and the Jargon File should report on hacker culture, and not attempt to enshrine it. After the second edition of NHD (MIT Press, 1993; hereafter Raymond-1993), Raymond was accused of adding terms reflecting his own politics and vocabulary, even though he says that entries to be added are checked to make sure that they are in live use, not "just the private coinage of one or two people".
The Raymond version was revised again, to include terminology from the nascent subculture of the public Internet and the World Wide Web, and published by MIT Press as The New Hacker's Dictionary, Third Edition, in 1996.
No updates have been made to the official Jargon File since 2003. A volunteer editor produced two updates reflecting later influences (mostly excoriated) from text messaging language, LOLspeak, and Internet slang in general; the last was produced in January 2012.
Impact and reception
Influence
Despite its tongue-in-cheek approach, multiple other style guides and similar works have cited The New Hacker's Dictionary as a reference, and even recommended following some of its "hackish" best practices. The Oxford English Dictionary has used the NHD as a source for computer-related neologisms. The Chicago Manual of Style, the leading American academic and book-publishing style guide, beginning with its 15th edition (2003), explicitly defers, for "computer writing", to the logical quotation punctuation style recommended by the essay "Hacker Writing Style" in The New Hacker's Dictionary (and cites NHD for nothing else). The 16th edition (2010) does likewise. The National Geographic Style Manual lists NHD among only 8 specialized dictionaries, out of 22 total sources, on which it is based. That manual is the house style of NGS publications, and has been available online for public browsing since 1995. The NGSM does not specify what, in particular, it drew from the NHD or any other source.
Aside from these guides and the Encyclopedia of New Media, the Jargon File, especially in print form, is frequently cited for both its definitions and its essays, by books and other works on hacker history, cyberpunk subculture, computer jargon and online style, and the rise of the Internet as a public medium, in works as diverse as the 20th edition of A Bibliography of Literary Theory, Criticism and Philology edited by José Ángel García Landa (2015); Wired Style: Principles of English Usage in the Digital Age by Constance Hale and Jessie Scanlon of Wired magazine (1999); Transhumanism: The History of a Dangerous Idea by David Livingstone (2015); Mark Dery's Flame Wars: The Discourse of Cyberculture (1994) and Escape Velocity: Cyberculture at the End of the Century (2007); Beyond Cyberpunk! A Do-it-yourself Guide to the Future by Gareth Branwyn and Peter Sugarman (1991); and numerous others.
Time magazine used The New Hacker's Dictionary (Raymond-1993) as the basis for an article about online culture in the November 1995 inaugural edition of the "Time Digital" department. NHD was cited by name on the front page of The Wall Street Journal. Upon the release of the second edition, Newsweek used it as a primary source, and quoted entries in a sidebar, for a major article on the Internet and its history. The MTV show This Week in Rock used excerpts from the Jargon File in its "CyberStuff" segments. Computing Reviews used one of the Jargon File's definitions on its December 1991 cover.
On October 23, 2003, The New Hacker's Dictionary was used in a legal case. SCO Group cited the 1996 edition definition of "FUD" (fear, uncertainty and doubt), which dwelt on questionable IBM business practices, in a legal filing in the civil lawsuit SCO Group, Inc. v. International Business Machines Corp. (In response, Raymond added SCO to the entry in a revised copy of the Jargon File, feeling that SCO's own practices deserved similar criticism.)
Defense of the term hacker
The book is particularly noted for helping (or at least trying) to preserve the distinction between a hacker (a consummate programmer) and a cracker (a computer criminal); even though not reviewing the book in detail, both the London Review of Books and MIT Technology Review remarked on it in this regard. In a substantial entry on the work, the Encyclopedia of New Media by Steve Jones (2002) observed that this defense of the term hacker was a motivating factor for both Steele's and Raymond's print editions:
Reviews and reactions
PC Magazine in 1984 stated that The Hacker's Dictionary was superior to most other computer-humor books, and noted its authenticity to "hard-core programmers' conversations", especially slang from MIT and Stanford. Reviews quoted by the publisher include: William Safire of The New York Times referring to the Raymond-1991 NHD as a "sprightly lexicon" and recommending it as a nerdy gift that holiday season (this reappeared in his "On Language" column again in mid-October 1992); Hugh Kenner in Byte suggesting that it was so engaging that one's reading of it should be "severely timed if you hope to get any work done"; and Mondo 2000 describing it as "slippery, elastic fun with language", as well as "not only a useful guidebook to very much un-official technical terms and street tech slang, but also a de facto ethnography of the early years of the hacker culture". Positive reviews were also published in academic as well as computer-industry publications, including IEEE Spectrum, New Scientist, PC Magazine, PC World, Science, and (repeatedly) Wired.
US game designer Steve Jackson, writing for Boing Boing magazine in its pre-blog, print days, described NHD essay "A Portrait of J. Random Hacker" as "a wonderfully accurate pseudo-demographic description of the people who make up the hacker culture". He was nevertheless critical of Raymond's tendency to editorialize, even "flame", and of the Steele cartoons, which Jackson described as "sophomoric, and embarrassingly out of place beside the dry and sophisticated humor of the text". He wound down his review with some rhetorical questions:
The third print edition garnered additional coverage, in the usual places like Wired (August 1996), and even in mainstream venues, including People magazine (October 21, 1996).
References
Further reading
External links
1991 non-fiction books
Books about computer hacking
Books by Eric S. Raymond
Books by Guy L. Steele Jr.
Computer books
Computer humor
Computer jargon
Computer programming folklore
Computer-related introductions in 1975
Creative Commons-licensed books
English dictionaries
Free software culture and documents
Software engineering folklore
Works about computer hacking
Eurymachus

The name Eurymachus (; Ancient Greek: Εὐρύμαχος Eurúmakhos) is attributed to the following individuals:
Mythology
Eurymachus, son of Hermes and father of Eriboea, mother of the Aloadae.
Eurymachus, a prince of the Phlegyes who attacked and destroyed Thebes after the death of Amphion and Zethus.
Eurymachus, the fourth suitor of Princess Hippodamia of Pisa, Elis. Like the other suitors of the latter, he was killed by the bride's father, King Oenomaus.
Eurymachus, son of Antenor and Theano. He was the brother of Crino, Acamas, Agenor, Antheus, Archelochus, Coön, Demoleon, Glaucus, Helicaon, Iphidamas, Laodamas, Laodocus, Medon, Polybus, and Thersilochus. Eurymachus was engaged to King Priam's daughter Polyxena.
Eurymachus, a fisherman from Syme, a small island between Caria and Rhodes, who came to fight against Troy with the islanders' leader Nireus. He was killed with a spear by Polydamas, the Trojan friend of Hector.
Eurymachus, an Achaean warrior who participated in the Trojan War. He was among those who hid inside the Wooden Horse.
Eurymachus, son of Polybus and one of the suitors of Penelope.
History
Eurymachus, one of the 180 Theban soldiers taken prisoner during the Theban attack on Plataea. The Plataeans killed all of the prisoners after bringing everyone living outside their walls into the city, once negotiations with the Theban relief force that arrived that night came to nothing. Thucydides states that Eurymachus was "a man of great influence at Thebes," and that the Plataean Naucleides arranged with him to bring in "a little over 300" Theban troops in the middle of the night for a sneak attack. This event touched off the Peloponnesian War.
Notes
References
Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website.
Dictys Cretensis, from The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian translated by Richard McIlwaine Frazer, Jr. (1931-). Indiana University Press. 1966. Online version at the Topos Text Project.
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. Greek text available at the Perseus Digital Library.
Homer, The Odyssey with an English Translation by A.T. Murray, PH.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1919. Online version at the Perseus Digital Library. Greek text available from the same website.
Homer. Odyssey. Trans. Stanley Lombardo. Canada: Hackett Publishing Company, Inc., 2000. Print.
Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library.
Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library.
Publius Vergilius Maro, Aeneid. Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library.
Publius Vergilius Maro, Bucolics, Aeneid, and Georgics. J. B. Greenough. Boston. Ginn & Co. 1900. Latin text available at the Perseus Digital Library.
Quintus Smyrnaeus, The Fall of Troy translated by Way. A. S. Loeb Classical Library Volume 19. London: William Heinemann, 1913. Online version at theoi.com
Quintus Smyrnaeus, The Fall of Troy. Arthur S. Way. London: William Heinemann; New York: G.P. Putnam's Sons. 1913. Greek text available at the Perseus Digital Library.
Smith, William. (1870). Dictionary of Greek and Roman Biography and Mythology. London: Taylor, Walton, and Maberly.
Thucydides. History of the Peloponnesian War, Book II.
Tzetzes, John, Allegories of the Iliad translated by Goldwyn, Adam J. and Kokkini, Dimitra. Dumbarton Oaks Medieval Library, Harvard University Press, 2015.
Achaeans (Homer)
Trojans
People of the Trojan War
Characters in the Odyssey
Boeotian mythology
Thessalian characters in Greek mythology
Characters in Greek mythology
Ancient Greeks