2422681 | https://en.wikipedia.org/wiki/UIMA | UIMA | UIMA, short for Unstructured Information Management Architecture, is an OASIS standard for content analytics, originally developed at IBM. It provides a component software architecture for the development, discovery, composition, and deployment of multi-modal analytics for the analysis of unstructured information and integration with search technologies.
Structure
The UIMA architecture can be thought of in four dimensions:
It specifies component interfaces in an analytics pipeline (a conceptual sketch follows this list).
It describes a set of design patterns.
It suggests two data representations: an in-memory representation of annotations for high-performance analytics and an XML representation of annotations for integration with remote web services.
It suggests development roles allowing tools to be used by users with diverse skills.
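The pipeline dimension can be made concrete with a short sketch. The code below is not the Apache UIMA API (which is Java-based and built around XML analysis engine descriptors and the CAS data structure); it is a hypothetical, minimal Python analogue showing how independent annotator components share a common interface and add stand-off annotations to a shared analysis object passed along the pipeline.

```python
# Conceptual sketch of an annotator pipeline (not the actual Apache UIMA API).
# Components implement one interface and attach stand-off annotations to a
# shared analysis structure, loosely analogous to UIMA's CAS.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    begin: int
    end: int
    type: str

@dataclass
class Analysis:          # stands in for UIMA's CAS
    text: str
    annotations: list = field(default_factory=list)

class Annotator:         # the common component interface
    def process(self, analysis: Analysis) -> None:
        raise NotImplementedError

class TokenAnnotator(Annotator):
    def process(self, analysis: Analysis) -> None:
        pos = 0
        for token in analysis.text.split():
            start = analysis.text.index(token, pos)
            analysis.annotations.append(Annotation(start, start + len(token), "Token"))
            pos = start + len(token)

class CapitalizedWordAnnotator(Annotator):
    def process(self, analysis: Analysis) -> None:
        for a in [x for x in analysis.annotations if x.type == "Token"]:
            if analysis.text[a.begin:a.end][0].isupper():
                analysis.annotations.append(Annotation(a.begin, a.end, "CapitalizedWord"))

# Compose and run the pipeline over one document.
pipeline = [TokenAnnotator(), CapitalizedWordAnnotator()]
doc = Analysis("UIMA analyzes unstructured Information")
for component in pipeline:
    component.process(doc)
print([(a.type, doc.text[a.begin:a.end]) for a in doc.annotations])
```

In Apache UIMA itself, the corresponding pieces are analysis engines described by XML descriptors, with annotations held in the CAS and serialized (for example as XMI) for exchange with remote services.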
Implementations and uses
Apache UIMA, a reference implementation of UIMA, is maintained by the Apache Software Foundation.
UIMA is used in a number of software projects:
IBM Research's Watson uses UIMA for analyzing unstructured data.
The Clinical Text Analysis and Knowledge Extraction System (Apache cTAKES) is a UIMA-based system for information extraction from medical records.
DKPro Core is a collection of reusable UIMA components for general-purpose natural language processing.
See also
Data Discovery and Query Builder
Entity extraction
General Architecture for Text Engineering (GATE)
IBM Omnifind
Languageware
References
External links
Apache UIMA home page
Apache Software Foundation projects
Software architecture
Data mining and machine learning software |
1497050 | https://en.wikipedia.org/wiki/X-13ARIMA-SEATS | X-13ARIMA-SEATS | X-13ARIMA-SEATS, successor to X-12-ARIMA and X-11, is a set of statistical methods for seasonal adjustment and other descriptive analysis of time series data that are implemented in the U.S. Census Bureau's software package. These methods are or have been used by Statistics Canada, Australian Bureau of Statistics, and the statistical offices of many other countries.
X-12-ARIMA can be used together with many statistical packages, such as SAS in its econometric and time series (ETS) package, R in its seasonal package, Gretl or EViews, which provide a graphical user interface for X-12-ARIMA, and NumXL, which provides X-12-ARIMA functionality in Microsoft Excel. There is also a version for Matlab.
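For illustration, Python's statsmodels library offers a similar wrapper (not listed above): it shells out to a locally installed X-13ARIMA-SEATS or X-12-ARIMA executable from the Census Bureau. The sketch below assumes such an executable is available on the PATH (or pointed to via the x12path argument) and uses synthetic data.

```python
# Sketch: seasonal adjustment through the X-13ARIMA-SEATS binary via statsmodels.
# Assumes the Census Bureau executable is installed; pass x12path="..." if it
# is not on the PATH. The input series here is synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.x13 import x13_arima_analysis

index = pd.date_range("2010-01", periods=120, freq="MS")   # monthly data
t = np.arange(120)
y = pd.Series(10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12), index=index)

result = x13_arima_analysis(y)       # add x12path="/path/to/x13" if needed
print(result.seasadj.head())         # seasonally adjusted series
print(result.trend.head())           # estimated trend component
```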
Notable statistical agencies presently using X-12-ARIMA for seasonal adjustment include Statistics Canada, the U.S. Bureau of Labor Statistics and Census and Statistics Department (Hong Kong). The Brazilian Institute of Geography and Statistics uses X-13-ARIMA.
X-12-ARIMA was the successor to X-11-ARIMA; the current version is X-13ARIMA-SEATS.
X-13ARIMA-SEATS's source code can be found on the Census Bureau's website.
Methods
The default method for seasonal adjustment is based on the X-11 algorithm. It is assumed that the observations in a time series, $Y_t$, can be decomposed additively,
$$Y_t = T_t + S_t + I_t,$$
or multiplicatively,
$$Y_t = T_t \times S_t \times I_t.$$
In this decomposition, $T_t$ is the trend (or the "trend cycle", because it also includes cyclical movements such as business cycles) component, $S_t$ is the seasonal component, and $I_t$ is the irregular (or random) component. The goal is to estimate each of the three components and then remove the seasonal component from the time series, producing a seasonally adjusted time series.
The decomposition is accomplished through the iterative application of centered moving averages. For an additive decomposition of a monthly time series, for example, the algorithm proceeds as follows (a simplified code sketch appears after the list):
An initial estimate of the trend is obtained by calculating centered moving averages of 13 observations (from $y_{t-6}$ to $y_{t+6}$).
Subtract the initial estimate of the trend series from the original series, leaving the seasonal and irregular components (SI).
Calculate an initial estimate of the seasonal component using a centered moving average of the SI series at seasonal frequencies (i.e., across the same calendar month in successive years), such as the 3×3 moving average $\tfrac{1}{9}\left(SI_{t-24} + 2\,SI_{t-12} + 3\,SI_{t} + 2\,SI_{t+12} + SI_{t+24}\right)$.
Calculate an initial seasonally adjusted series by subtracting the initial seasonal component from the original series.
Calculate another estimate of the trend using a different set of weights (known as "Henderson weights").
Remove the trend again and calculate another estimate of the seasonal factor.
Seasonally adjust the series again with the new seasonal factors.
Calculate the final trend and irregular components from the seasonally adjusted series.
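The sketch below illustrates the first of these steps for an additive monthly decomposition in simplified form. It is not the production X-11/X-13 code: the real procedure uses Henderson weights, asymmetric filters at the ends of the series, seasonal moving averages rather than simple month-by-month means, and extreme-value adjustments, all of which are omitted here.

```python
# Simplified illustration of the first steps of an X-11-style additive
# decomposition for monthly data (real X-11/X-13 uses Henderson trends,
# asymmetric end filters, seasonal moving averages, and outlier handling).
import numpy as np

def centered_ma_13(y):
    """2x12 centered moving average: 13 terms with half weight on each end."""
    w = np.array([0.5] + [1.0] * 11 + [0.5]) / 12.0
    trend = np.full(len(y), np.nan)
    for t in range(6, len(y) - 6):
        trend[t] = np.dot(w, y[t - 6:t + 7])
    return trend

def initial_seasonal(si, period=12):
    """Average the detrended (SI) values for each calendar month."""
    seas = np.full(len(si), np.nan)
    for m in range(period):
        seas[m::period] = np.nanmean(si[m::period])
    return seas

rng = np.random.default_rng(0)
t = np.arange(144)
y = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, len(t))

trend0 = centered_ma_13(y)      # step 1: initial trend estimate
si = y - trend0                 # step 2: seasonal + irregular (SI) series
seas0 = initial_seasonal(si)    # step 3: initial seasonal component
adjusted = y - seas0            # step 4: initial seasonally adjusted series
```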
The method also includes a number of tests, diagnostics and other statistics for evaluating the quality of the seasonal adjustments.
Copyright and conditions
The software is a work of the US government, and such works are in the public domain in the US; for this software, copyright permission has also been granted for other countries, subject to the condition that the "User agrees to make a good faith effort to use the Software in a way that does not cause damage, harm, or embarrassment to the United States/Commerce."
See also
ARIMA
CSPro
Seasonality
References
External links
X-13ARIMA-SEATS Seasonal Adjustment Program documentation on website of the US Census Bureau
Free econometrics software
Free statistical software
Public-domain software with source code
Science software for Windows
Science software for Linux
United States Census Bureau
Time series software |
921577 | https://en.wikipedia.org/wiki/Blue%20Sky%20Studios | Blue Sky Studios | Blue Sky Studios, Inc. was an American computer animation film studio based in Greenwich, Connecticut. It was founded in 1987 by Chris Wedge, Michael Ferraro, Carl Ludwig, Alison Brown, David Brown, and Eugene Troubetzkoy after their employer, MAGI, one of the visual effects studios behind Tron, shut down. Using its in-house rendering software, the studio created visual effects for commercials and films before dedicating itself to animated film production. Its first feature, Ice Age, was released in 2002 by 20th Century Fox. It produced 13 feature films, the final one being Spies in Disguise, released on December 25, 2019.
Blue Sky Studios was a subsidiary of 20th Century Animation until its acquisition by Disney, as part of their acquisition of 21st Century Fox assets in 2019. In February 2021, Disney announced that Blue Sky would be shut down in April 2021 citing the economic impact of the COVID-19 pandemic on its business operations. The studio ceased all operations on April 10, 2021.
Ice Age and Rio were the studio's most commercially successful franchises, while Robots, Horton Hears a Who!, The Peanuts Movie, and Spies in Disguise were among its most critically praised films. Scrat, a character from Ice Age, was the studio's mascot.
History
1980–1989: Formation and early computer animation
In the late 1970s, Chris Wedge, then an undergraduate at Purchase College studying film, was employed by Mathematical Applications Group, Inc. (MAGI). MAGI was an early computer technology company which produced SynthaVision, a software application that could replicate the laws of physics to measure nuclear radiation rays for U.S. government contracts. At MAGI, Wedge met Eugene Troubetzkoy, who held a Ph.D in theoretical physics and was one of the first computer animators. Using his background in character animation, Wedge helped MAGI produce animation for television commercials, which eventually led to an offer from Walt Disney Productions to produce animation for the film Tron (1982). After Tron, MAGI hired Carl Ludwig, an electrical engineer, and Mike Ferraro transferred to the film division from the Cad Cam division of MAGI. As MAGI's success began to decline, the company employed David Brown from CBS/Fox Video to be a marketing executive and Alison Brown to be a managing producer. After MAGI was sold to Vidmax (Canada), the six individuals—Wedge, Troubetzkoy, Ferraro, Ludwig, David Brown, and Alison Brown—founded Blue Sky Studios in February 1987 to continue the software design and produce computer animation.
At Blue Sky, Ferraro and Ludwig expanded on CGI Studio, the studio programming language they started at MAGI and began using it for animation production. At the time, scanline renderers were prevalent in the computer graphics industry, and they required computer animators and digital artists to add lighting effects in manually; Troubetzkoy and Ludwig adapted MAGI's ray tracing, algorithms which simulate the physical properties of light in order to produce lighting effects automatically. To accomplish this, Ludwig examined how light passes through water, ice, and crystal, and programmed those properties into the software. Following the stock market crash of 1987, Blue Sky Studios did not find their first client until about two years later: a company "that wanted their logo animated so it would be seen flying over the ocean in front of a sunset." In order to receive the commission, Blue Sky spent two days rendering a single frame and submitted it to the prospective client. However, once the client accepted their offer, Blue Sky found that they could not produce the entire animation in time without help from a local graphics studio, which provided them with extra computer processors.
1989–2002: Television commercials, visual effects and Bunny
Throughout the late 1980s and 1990s, Blue Sky Studios concentrated on the production of television commercials and visual effects for film. The studio began by animating commercials that depicted the mechanisms of time-release capsules for pharmaceutical corporations. The studio also produced a Chock Full O' Nuts commercial with a talking coffee bean and developed the first computer-animated M&M's. Using CGI Studio, the studio produced over 200 other commercials for clients such as Chrysler, General Foods, Texaco, and the United States Marines. They made a cartoon bumper for Nicktoons that features an orange blob making a dolphin, a dinosaur, and a walking person.
In 1996, MTV collaborated with Blue Sky Studios on the film Joe's Apartment, for which Blue Sky animated the insect characters. Other clients included Bell Atlantic, Rayovac, Gillette and Braun. The Braun commercial was awarded a CLIO Award for Advertising. Recalling the award, Carl Ludwig stated that the judges had initially mistaken the commercial as a live action submission as a result of the photorealism of the computer-animated razor. In August 1997, 20th Century Fox's Los Angeles-based visual effects company, VIFX, acquired majority interest in Blue Sky Studios to form a new visual effects and animation company, temporarily renamed "Blue Sky/VIFX". Following the studio's expansion, Blue Sky produced character animation for the films Alien Resurrection (1997), A Simple Wish (1997), Mouse Hunt (1997), Star Trek: Insurrection (1998) and Fight Club (1999).
Meanwhile, starting in 1990, Chris Wedge had been working on a short film named Bunny, intended to demonstrate CGI Studio. The film revolves around a rabbit widow who is irritated by a moth. The moth subsequently leads the rabbit into "a heavenly glow, reuniting her with her husband." At the time, Wedge had been the thesis advisor for Carlos Saldanha while Saldanha was a graduate student at the School of Visual Arts; Wedge shared storyboard panels for Bunny with Saldanha during this time. After Saldanha's graduation, Blue Sky Studios hired him as an animator, and he later directed a few commercials. It was not until 1996 when Nina Rappaport, a producer at Blue Sky Studios, assigned Wedge to complete the Bunny project, which required CGI Studio to render fur, glass, and metal from multiple light sources, such as a swinging light bulb and an "ethereal cloudscape". In the initial stages of the Bunny project, Carl Ludwig modified CGI Studio to simulate radiosity, which tracks light rays as they reflect off of multiple surfaces. Blue Sky Studios released Bunny in 1998, and it received the Academy Award for Best Animated Short Film. Bunny's success gave Blue Sky Studios the opportunity to produce feature-length films.
2002–2018: Feature films under 20th Century Fox
In March 1999, Fox decided to sell VIFX to another visual effects house, Rhythm & Hues Studios, while Blue Sky Studios would remain under Fox. According to Chris Wedge, Fox considered selling Blue Sky as well by 2000 due to financial difficulties in the visual effects industry in general. Instead, Wedge, film producer Lori Forte, and animation executive Chris Meledandri presented Fox with a script for a comedy feature film titled Ice Age. Studio management pressured staff to sell their remaining shares and options to Fox on the promise of continued employment on feature-length films. The studio moved to White Plains, New York, and started production on Ice Age. As the film wrapped, Fox feared that it might bomb at the box office. They terminated half of the production staff and tried unsuccessfully to find a buyer for the film and the studio. Instead, Ice Age was released by 20th Century Fox on March 15, 2002, and was a critical and commercial success, receiving a nomination for an Academy Award for Best Animated Feature at the 75th Academy Awards in 2003. The film established Blue Sky as the third studio, after Pixar and DreamWorks Animation, to launch a successful CGI franchise.
In January 2009, the studio moved from White Plains, New York to Greenwich, Connecticut, taking advantage of the state's 30 percent tax credit and having more space to grow. The studio stated in April 2017 that it intended to stay in Connecticut until 2025.
In 2013, Chris Wedge took a leave of absence to direct Paramount Animation's live-action/computer-animated film Monster Trucks. He then returned to Blue Sky Studios and worked on multiple projects for the company, such as serving as an executive producer.
2019–2021: Disney acquisition and closure
Ownership of Blue Sky Studios was assumed by The Walt Disney Company as part of their 2019 acquisition of 21st Century Fox, which concluded on March 20, 2019. On March 21, Disney announced that Blue Sky Studios and its parent company 20th Century Fox Animation (now 20th Century Animation) would be integrated as units within the Walt Disney Studios with co-presidents Andrea Miloro and Robert Baird continuing to lead the studio, while reporting to Walt Disney Studios chairman Alan Horn. In July 2019, Miloro announced that she would be stepping down from her role as co-president, thus leaving Baird as sole president.
In August 2019, former Walt Disney Animation Studios head Andrew Millstein was named as co-president of Blue Sky Studios alongside Baird, while Pixar Animation Studios president Jim Morris would also be taking a supervising role.
On February 9, 2021, Disney announced that it was closing Blue Sky Studios in April 2021. The company explained that in light of the ongoing COVID-19 pandemic's continued economic impact on all of its businesses, it was no longer sustainable for it to run a third feature animation studio. In addition, production on a film adaptation of the webcomic Nimona, originally scheduled to be released on January 14, 2022, was cancelled as a result of the closure. The studio's film library and intellectual properties are retained by Disney. Although Disney did not initially give an exact closing date, former animator Rick Fournier confirmed on April 10 that it was the studio's last day of operation, three days after co-founder Chris Wedge released a farewell letter on social media.
As of June 19, 2021, Blue Sky Studios' website redirects to Disney.com.
Filmography
Feature films
Television specials
Short films
Contributions
Nickelodeon (1991) – Blob ident
Joe's Apartment (1996) – dancing and singing cockroaches
Alien Resurrection (1997) – the aliens
A Simple Wish (1997) – numerous characters and special effects
Mouse Hunt (1997) – several mice and household effects
Star Trek: Insurrection (1998) – several alien creatures
Jesus' Son (1999) – sacred heart, "liquid" glass, and screaming cotton ball effects
Fight Club (1999) – the "sliding" penguin
The Sopranos (2000) – the "talking fish" in the episode "Funhouse"
Titan A.E. (2000) – 3D animation: creation of the new world in the final "Genesis" sequence
Family Guy (2006) – Scrat's cameo in the episode "Sibling Rivalry"
Franchises
See also
20th Century Animation
Fox Animation Studios
Rhythm and Hues Studios
Pixar
Walt Disney Animation Studios
Disneytoon Studios
List of 20th Century Studios theatrical animated feature films
List of Disney theatrical animated feature films
Notes
References
Further reading
External links
1987 establishments in Connecticut
20th Century Studios
2021 disestablishments in Connecticut
American companies established in 1987
American companies disestablished in 2021
American animation studios
Defunct film and television production companies of the United States
Disney acquisitions
Disney production studios
Film production companies of the United States
Fox animation
The Walt Disney Studios
Visual effects companies
Companies based in Fairfield County, Connecticut
Mass media companies established in 1987
Mass media companies disestablished in 2021
Former News Corporation subsidiaries
Companies disestablished due to the COVID-19 pandemic
Impact of the COVID-19 pandemic on cinema |
1962219 | https://en.wikipedia.org/wiki/Hocus%20Pocus%20%281993%20film%29 | Hocus Pocus (1993 film) | Hocus Pocus is a 1993 American fantasy comedy horror film directed by Kenny Ortega and written by Neil Cuthbert and Mick Garris. The film follows a villainous comedic trio of witches (Bette Midler, Sarah Jessica Parker, and Kathy Najimy) who are inadvertently resurrected by a virgin teenage boy (Omri Katz) in Salem, Massachusetts, on Halloween night.
The film was released in the United States on July 16, 1993, by Walt Disney Pictures. Upon its release, it received mixed reviews from film critics and was a box office failure, possibly losing Disney around $16.5 million during its theatrical run. However, largely through annual airings on Disney Channel and Freeform (formerly ABC Family) throughout the month of October, Hocus Pocus has been rediscovered by audiences, resulting in a yearly spike in home video sales of the film every Halloween season. The annual celebration of Halloween has helped make the film a cult classic.
A sequel is in production, written by Jen D'Angelo, directed by Anne Fletcher and set for a 2022 release as a Disney+ original film.
Plot
On October 31, 1693, in Salem, Massachusetts, Thackery (Binx) witnesses his little sister, Emily, being whisked away to the woods by the Sanderson sisters, three witches named Winifred, Sarah, and Mary. At their cottage, the witches feed Emily a potion which allows them to absorb her life force and regain their youth, killing her in the process. Binx confronts the witches, but is transformed into a black cat, cursed to live forever with his guilt for not saving Emily. After discovering the children missing, the townsfolk arrest the sisters and sentence them to be hanged for the murder of Binx and Emily. Before their execution, Winifred casts a curse that will resurrect the sisters during a full moon on All Hallows' Eve if a virgin lights the Black Flame Candle in their cottage. Binx decides to spend his life close by, to ensure no one ever summons the witches.
Three centuries later, on October 31, 1993, Max Dennison is feeling unsettled by his family's sudden move from Los Angeles, California, to Salem. On Halloween, Max takes his younger sister Dani out trick-or-treating, where they meet Max's new crush Allison. In an effort to impress Allison, Max invites her to show him the Sanderson house, challenging her to convince him that the witches were real.
Inside the Sanderson cottage, now a former museum, Max lights the Black Flame Candle and inadvertently resurrects the witches due to his virginity. The witches attempt to suck the soul of Dani, but Max comes to her rescue. Escaping, Max steals Winifred's spellbook on advice from Binx. He takes the group to an old cemetery where they will be protected from the witches due to it being hallowed ground.
The witches begin their search for the spellbook, but are horrified when they discover that Halloween has become a festival of disguises. They pursue the children across town using Mary's enhanced sense of smell. Winifred reveals that the spell that brought them back only works on Halloween and unless they can suck the life out of at least one child, they will turn to dust when the sun rises. After luring them to the high school, the children trap the witches in a pottery kiln and burn them alive. While the children are celebrating, the curse revives the witches.
Not realizing the witches have survived, Max and Allison open the spellbook, hoping to reverse the spell on Binx. The open spellbook reveals the location of the group, and the witches track them down, kidnap Dani and Binx, and recover the spellbook. Sarah uses her siren-like song to entice Salem's children, luring them to the Sanderson cottage. Max and Allison free Dani and Binx by tricking the witches into believing that sunrise was an hour early. Thinking that they are done for, the witches panic and pass out, allowing Max, Dani, Allison, and Binx to escape.
Back at the cemetery, the witches attack from the air and snatch Dani. Winifred attempts to use the last vial of potion to suck the soul from Dani. Binx knocks the potion out of her hand which Max catches and promptly drinks, forcing the witches to take him instead of Dani. The sun starts to rise just as Winifred is about to finish draining Max's life force. In the ensuing struggle, Allison and Dani fend off Mary and Sarah. Max and Winifred, struggling in the air, fall onto the hallowed ground in the cemetery, causing Winifred to turn into stone. As the sun finishes rising above the horizon, Mary and Sarah are disintegrated into dust along with Winifred's stone body.
The witches' deaths break Binx's curse, allowing him to finally die and freeing his soul. Appearing as a spirit, Binx thanks the group for their help and bids farewell to them as he is reunited with the spirit of his sister.
Cast
Bette Midler as Winifred "Winnie" Sanderson, the smart leader of the sisters
Sarah Jessica Parker as Sarah Sanderson, the dim-witted sister who uses her siren-like voice to lure in children
Kathy Najimy as Mary Sanderson, the middle witch who can smell out children
Omri Katz as Max Dennison, a teenager from Los Angeles, California, who recently moved to Salem, Massachusetts
Thora Birch as Dani Dennison, Max's 8-year-old sister who loves Halloween
Vinessa Shaw as Allison, Max's crush and classmate
Charles Rocket as Dave Dennison, father of Max and Dani
Stephanie Faracy as Jenny Dennison, mother of Max and Dani
Larry Bagby as Ernie/Ice, teenage bully in 20th-century Salem
Tobias Jelinek as Jay, teenage bully in 20th-century Salem
Sean Murray as Thackery Binx, teenage boy from 1693 cursed to live as an immortal black cat
Jason Marsden as voice of Thackery Binx (cat form)
Doug Jones as William "Billy" Butcherson, an ex-boyfriend poisoned by Winifred in 1693 and resurrected as a zombie 300 years later.
Amanda Shepherd as Emily Binx, Thackery's little sister and 1693 victim of the Sanderson sisters
Kathleen Freeman as Miss Olin, teacher at Jacob Bailey High School
Steve Voboril as Elijah, friend of Thackery in 1693
Norbert Weisser as Mr. Binx, father of Thackery and Emily in 1693
Garry Marshall (uncredited) as Master Devil, unsuspecting homeowner in 20th-century Salem
Penny Marshall (uncredited) as Medusa Lady, unsuspecting homeowner in 20th-century Salem
Production
Development
In the 1994 TV documentary Hocus Pocus: Begin the Magic, and on the film's Blu-ray release, producer David Kirschner explains how he came up with the idea for the film one night. He and his young daughter were sitting outside and his neighbor's black cat strayed by. Kirschner invented a tale of how the cat was once a boy who was changed into a feline three hundred years ago by three witches.
Hocus Pocus started life as a script by Mick Garris that was bought by Walt Disney Pictures in 1984. The film's working title was Disney's Halloween House; this version was much darker and scarier, and its protagonists were all 12-year-olds. Garris and Kirschner pitched it to Steven Spielberg's Amblin Entertainment; Spielberg saw Disney as a competitor to Amblin in the family film market at the time and refused to co-produce a film with his "rival."
Writing
Various rewrites made the film more comedic and turned two of its young protagonists into teenagers; however, production stalled several times until 1992, when Bette Midler expressed interest in the script and the project immediately went forward. Midler, who plays the central antagonist of the film (a role originally written for Cloris Leachman), is quoted as saying that Hocus Pocus "was the most fun I'd had in my career up to that point".
Casting
Leonardo DiCaprio was originally offered the lead role of Max but declined it to pursue What's Eating Gilbert Grape.
Filming
Principal photography began on October 12, 1992. The film is set in Salem, Massachusetts, but most of it was shot on sound stages in Burbank, California. However, its daytime scenes were filmed in Salem and Marblehead, Massachusetts during two weeks of filming with principal cast. Production was completed on February 10, 1993.
Pioneer Village, a recreation of early-colonial Salem, was used for the opening scenes set in 1693. Other locations included Old Burial Hill in Marblehead, where Max is accosted by Ice and Jay, the Old Town Hall in Salem, where the town Halloween party takes place, and Phillips Elementary School, where the witches are trapped in a kiln. The exterior for Max and Dani's house is a private residence on Ocean Avenue in Salem.
Release
The film was released in the United States on July 16, 1993, by Walt Disney Pictures, to mixed reviews from film critics, and was not a commercial success, possibly losing Disney around $16.5 million during its theatrical run. Annual October airings on Disney Channel and Freeform (formerly ABC Family) have since helped the film find a new audience, driving a yearly spike in home video sales each Halloween season and turning it into a cult classic.
Home video
The film was released to VHS in North America on January 5, 1994, and later to DVD on June 4, 2002. Following the film's release on DVD, it has continued to show strong annual sales, earning more than $1 million in DVD sales each October. In the mid-to-late 1990s, the film was rebroadcast annually on ABC and Disney Channel before switching over to ABC Family's 13 Nights of Halloween lineup in the early 2000s. The film has continuously brought record viewing numbers to the lineup, including a 2009 broadcast watched by 2.5 million viewers. In 2011, an October 29 airing became the lineup's most watched program, with 2.8 million viewers. On September 4, 2012, the film was released on Blu-ray. Disney re-released the film on Blu-ray and Digital HD on September 2, 2018, as part of the film's 25th anniversary. The new release contains special features, including deleted scenes and a behind-the-scenes retrospective. On September 15, 2020, the film was released on Ultra HD Blu-ray in 4K resolution with HDR.
Music
The musical score for Hocus Pocus was composed and conducted by John Debney. James Horner was originally slated to score the film, but became unavailable at the last minute, so Debney had to score the entire film in two weeks. Even though he didn't score the film, Horner came back to write the theme for Sarah (sung by Sarah Jessica Parker, more commonly known as "Come Little Children") which is featured in Intrada's Complete Edition of the score.
Debney released a promotional score through the internet containing 19 tracks from the film. Bootlegs were subsequently released across the internet, primarily because the promotional release omitted the music for the entire opening sequence.
Songs
"Sarah's Theme" – music by James Horner; lyrics by Brock Walsh; performed by Sarah Jessica Parker
"I Put a Spell on You" – written by Jay Hawkins and produced and arranged by Marc Shaiman; performed by Bette Midler
"Witchcraft" – written by Cy Coleman, Carolyn Leigh; performed by Joe Malone
"I Put a Spell on You" – written by Jay Hawkins; performed by Joe Malone
"Sabre Dance" – written by Aram Khachaturian, arranged by George Wilson
Chants and Incantations – conceived and written by Brock Walsh
Reception
Box office
Hocus Pocus was released July 16, 1993, and came in fourth place on its opening weekend, grossing $8.1 million. It dropped from the top ten ranking after two weeks of release. The film was released the same day as Free Willy. According to Kirschner, Disney chose to release Hocus Pocus in July to take advantage of children being off from school for the summer.
In October 2020, amid the COVID-19 pandemic, Hocus Pocus was re-released in 2,570 theaters. It made $1.9 million over the weekend, finishing second behind Tenet. The following two weekends it made $1.2 million and $756,000, respectively.
Critical response
On review aggregator Rotten Tomatoes, the film has an approval rating of 38% based on 56 reviews, with an average rating of 4.9/10. The website's critical consensus reads, "Harmlessly hokey yet never much more than mediocre, Hocus Pocus is a muddled family-friendly effort that fails to live up to the talents of its impressive cast." Metacritic assigned the film a weighted average score of 47 out of 100, based on 23 critics, indicating "mixed or average reviews". Audiences polled by CinemaScore gave the film an average grade of "B+" on an A+ to F scale.
Gene Siskel, reviewing for The Chicago Tribune, remarked that the film was a "dreadful witches' comedy with the only tolerable moment coming when Bette Midler presents a single song." Roger Ebert in The Chicago Sun-Times gave the film one star out of a possible four, writing that it was "a confusing cauldron in which there is great activity but little progress, and a lot of hysterical shrieking". The Miami Herald called it "a pretty lackluster affair", adding this comment: "Despite the triple-threat actress combo, Hocus Pocus won't be the Sister Act of 1993. There are a lot of gotta-sees this summer, and this isn't one of them."
Janet Maslin of The New York Times wrote that the film "has flashes of visual stylishness but virtually no grip on its story". Ty Burr of Entertainment Weekly gave the film a C-, calling it "acceptable scary-silly kid fodder that adults will find only mildly insulting. Unless they're Bette Midler fans. In which case it's depressing as hell"; and stating that while Najimy and Parker "have their moments of ramshackle comic inspiration, and the passable special effects should keep younger campers transfixed [...] [T]he sight of the Divine Miss M. mugging her way through a cheesy supernatural kiddie comedy is, to say the least, dispiriting."
Legacy
Over the years, through various outlets such as strong DVD sales and annual record-breaking showings on Freeform's 31 Nights of Halloween, the film has achieved cult status. Various media outlets such as Celebuzz and Oh No They Didn't have reiterated such claims. In its 25th anniversary year in 2018, the first week of Hocus Pocus viewings on Freeform averaged 8.2 million viewers. A special called the "Hocus Pocus 25th Anniversary Halloween Bash" was filmed at the Hollywood Forever Cemetery and features interviews with members of the cast, including Bette Midler, Sarah Jessica Parker, and Kathy Najimy, as well as a costume contest hosted by Sharon and Kelly Osbourne. It aired on Freeform October 20, 2018.
In October 2011, the Houston Symphony celebrated various horror and Halloween classics, including Hocus Pocus, with "The Hocus Pocus Pops". On October 19, 2013, D23 held a special screening of Hocus Pocus at the Walt Disney Studios in Burbank, California, to honor the 20th anniversary of the film. Nine of the cast and crew gathered for the screening, and hundreds of D23 members attended. Returning members included Kathy Najimy, David Kirschner, Thora Birch, Doug Jones, Vinessa Shaw, and Omri Katz.
During her Divine Intervention Tour in 2015, Bette Midler appeared on stage dressed as Winifred Sanderson. Her Harlettes appeared with her dressed as Mary and Sarah, and the three of them performed the film's version of "I Put a Spell on You".
On September 15, 2015, the Hocus Pocus Villain Spelltacular was introduced at the Magic Kingdom as a part of Mickey's Not-So-Scary Halloween Party. The show introduces new actresses as the Sanderson Sisters, who try to make a villain party and summon or attract various Disney villains in the process. In September 2016, entertainment critic Aaron Wallace published Hocus Pocus in Focus: The Thinking Fan's Guide to Disney's Halloween Classic, the first full-length book written about the film. The book includes a foreword by Thora Birch and afterword by Mick Garris. Billed as a "lighthearted but scholarly look at the film," the book analyzes the film's major themes, which it identifies as festivity, nostalgia, home, horror, virginity, feminism, Broadway-style musical moments, sibling rivalry, "Spielbergian" filmmaking style, Disney villain traditions, and more. Wallace also analyzes Walt Disney World's Hocus Pocus Villain Spelltacular as part of the film's legacy and includes "the largest collection of Hocus Pocus fun facts and trivia ever assembled," complete with extensive endnote citations.
The City of Salem has celebrated its connection to Hocus Pocus, while local filming sites have become an attraction for fans as the film's legacy has grown over the years. In 2018, the Haunted Happenings Grand Parade, an annual Salem festival held every October, was Hocus Pocus-themed in honor of the film's 25th anniversary. A representative for Destination Salem also reported a huge uptick in tourism for the 25th anniversary year, stating: "There's always been a ‘Hocus Pocus’ component to the visitors to Salem, especially in October. But it's like the film's following grows every year.”
The cast reunited for In Search of the Sanderson Sisters: A Hocus Pocus Hulaween Takeover which aired on October 30, 2020. The one-hour broadcast was virtual due to the COVID-19 pandemic, and the proceeds will go to the New York Restoration Project. Members of the cast who participated were Bette Midler, Sarah Jessica Parker, Kathy Najimy, Thora Birch, Omri Katz, Vinessa Shaw, and Doug Jones. Other notable participants of the benefit included Meryl Streep, Mariah Carey, Cassandra Peterson, Glenn Close, Billy Crystal, Jamie Lee Curtis, Todrick Hall, Jennifer Hudson, Anjelah Johnson-Reyes, Michael Kors, Adam Lambert, George Lopez, Alex Moffat, Martin Short, Sarah Silverman, John Stamos, Kenan Thompson, Sophie von Haselberg, and Bella Hadid.
Sequel
In July 2014, it was announced that Disney was developing a supernatural-themed film about witches, and that Tina Fey was on board as a producer and star. However, Deadline debunked rumors that the film was a sequel to Hocus Pocus. In November 2014, Bette Midler said in an interview that she was ready and willing to return for a sequel. She also said her co-stars Sarah Jessica Parker and Kathy Najimy were interested in reprising the roles of the Sanderson sisters as well, but stressed that Disney had yet to greenlight any sequel. In November 2015, Midler stated in a Facebook Q&A that "after all these years and all the fan demand, I do believe I can stand and firmly say an unequivocal no" in response to a question about a sequel.
In June 2016, actor Doug Jones mentioned that Disney had been considering a sequel, and behind the scenes discussions were in place to possibly continue the series. In October 2016, Sarah Jessica Parker was asked by Andy Cohen about a sequel. Her response was, "I would love that. I think we've been very vocal that we're very keen." In Hocus Pocus in Focus: The Thinking Fan's Guide to Disney's Halloween Classic, author Aaron Wallace identifies several potential approaches for a sequel, but notes that the project's biggest challenge is the Walt Disney Studios' interest in tentpole projects that promise very high box office returns.
In September 2017, screenwriter Mick Garris admitted that he was working on a script for Hocus Pocus 2 and that it would potentially be developed as a television film for Disney Channel, Freeform or ABC. It was later confirmed that it will instead be a remake to air on Freeform, with The Royals writer Scarlett Lacey attached to write, and the original film producer David Kirschner executive producing. The following month, Midler said she was not fond of the idea of a remake and she would not be taking part in it.
In July 2018, a book titled Hocus Pocus and the All-New Sequel was released, containing a novelization of the film and a sequel story. The sequel focuses on Max and Allison's daughter, Poppy, who grew up hearing the family story of the first film and parents who avoid Halloween as much as possible. Poppy is skeptical of the tale and ends up in the Sanderson house on Halloween, twenty-five years to the day after the film, in an attempt to prove there is nothing to the story.
In October 2019, a sequel was announced to be in development as a Disney+ exclusive film, with a screenplay written by Jen D'Angelo. Shortly after the report, Midler, Parker, and Najimy all confirmed their interest in reprising their roles. In March 2020, Adam Shankman signed on to direct. In December 2020, it was officially announced that the film would be premiering on Disney+. In April 2021, Anne Fletcher replaced Shankman as the director. Production was scheduled to begin in the summer of 2021, in Salem, Massachusetts. In May 2021, Adam Shankman teased some big news about the film and that he would still be involved with it despite no longer being the director. He later confirmed that Fletcher would serve as director and that he would instead remain attached to the film as an executive producer, due to his duties as director of Disenchanted.
In May 2021, it was confirmed that Midler, Parker, and Najimy will reprise their roles as the Sanderson Sisters and the film would be released in 2022. In September 2021, sets for Hocus Pocus 2 were confirmed being built in Lincoln, Rhode Island. In October 2021, it was announced John Debney, the composer of the original film, is set to return to score the sequel. Shortly afterwards, it was announced that Taylor Henderson had been cast as one of the three leads.
Filming began on October 18, 2021, in Providence, Rhode Island.
See also
List of American films of 1993
References
External links
1993 films
1990s fantasy-comedy films
1990s children's comedy films
American children's comedy films
American children's fantasy films
American films
American fantasy-comedy films
American zombie comedy films
American comedy films
American supernatural horror films
English-language films
Films about cats
Films about curses
American films about Halloween
Films about potions
Films about witchcraft
Films directed by Kenny Ortega
Films featuring hypnosis
Films produced by David Kirschner
Films scored by James Horner
Films scored by John Debney
Films set in the 1690s
Films set in 1993
Films set in Massachusetts
Films set in the Thirteen Colonies
Films shot in California
Films shot in Los Angeles
Films shot in Massachusetts
Resurrection in film
Salem witch trials in fiction
Films about Satanism
Walt Disney Pictures films
Films with screenplays by Mick Garris
Films with screenplays by David Kirschner
1993 comedy films
Films about virginity
Films produced by Steven Haft
Films about sisters |
7082492 | https://en.wikipedia.org/wiki/Agentless%20data%20collection | Agentless data collection | In the field of information technology, agentless data collection involves collecting data from computers without installing any new agents on them.
What is an agent?
For the purpose of this discussion, an agent is a software program (sometimes called a service or daemon) that runs on a computer with the primary purpose of collecting information and pushing it over the network to a central location (or else of re-publishing the information in a standard format like SNMP so that it can then be collected over the network from the central location).
The traditional approach to data collection involves installing agents on all computers from which data is needed. Sometimes this installation step is performed manually for each computer; at other times it is automated via a centralized installation server that pushes software to other computers. In either case, the cost of installation (and subsequent maintenance and upgrades) is typically proportional to the number of computers that require installation services, which in turn equals the number of computers from which data is needed.
Agentless approach
In the agentless approach, data is collected from computers without installing additional agents. This is accomplished by obtaining data from software that is already installed on the computer, including the operating system as well as previously installed commercial products (or commercial products which do not require an installation to execute). In many cases, the programs and protocols already present on a computer are more than enough to obtain the desired information.
The primary benefit of the agentless approach is that it is not necessary to install, upgrade and maintain additional software programs on each computer from which information is needed. Software products that use this approach may have a faster rollout and a lower total cost of ownership (TCO) than software products that require agents on a substantial number of computers.
Relevant network protocols
Any network protocol that returns useful information can be employed, providing only that the protocol server is already installed. Again, the distinction between agentless and agent-based is not the specific protocol used but whether a new protocol server (agent) must be installed.
In many cases, it is possible to find servers for these protocols: log4j, CIFS, SSH, SNMP, Windows Management Instrumentation (for Windows platform), DTrace (for Solaris 10 platform). However, a large number of other protocols may be helpful as well.
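As an illustration of the agentless idea, the sketch below collects a few facts from a remote machine over its existing SSH service using the Python paramiko library, relying only on the SSH daemon and standard utilities already present on the target. The hostname, credentials, and command list are placeholders, not real values.

```python
# Agentless collection sketch: query a remote host over its existing SSH
# service instead of installing a collection agent on it.
# The hostname, credentials, and commands below are illustrative placeholders.
import paramiko

HOST, USER, PASSWORD = "server.example.com", "monitor", "secret"
COMMANDS = {
    "uptime": "uptime",
    "disk_usage": "df -h /",
    "kernel": "uname -r",
}

def collect(host, user, password, commands):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password, timeout=10)
    results = {}
    try:
        for name, cmd in commands.items():
            _stdin, stdout, _stderr = client.exec_command(cmd)
            results[name] = stdout.read().decode().strip()
    finally:
        client.close()
    return results

if __name__ == "__main__":
    for key, value in collect(HOST, USER, PASSWORD, COMMANDS).items():
        print(f"{key}: {value}")
```

The same pattern applies to the other protocols listed above (SNMP queries, WMI calls, reading shared logs over CIFS): what makes a product agentless is that it relies only on servers already present on the target.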
Versus data mining
The meaning of the phrase data mining is related to but different from data collection. The former is typically about finding useful patterns in data that is conveniently accessible in a relational database. In contrast, the latter involves extracting data from a variety of less convenient sources, although in some cases it may also involve identifying or leveraging useful patterns.
See also
Data Mining
Text Mining
Internet Protocol based network software
Data collection |
28814343 | https://en.wikipedia.org/wiki/Crosby%20Garrett%20Helmet | Crosby Garrett Helmet | The Crosby Garrett Helmet is a copper alloy Roman cavalry helmet dating from the late 2nd or early 3rd century AD. It was found by an unnamed metal detectorist near Crosby Garrett in Cumbria, England, in May 2010. Later investigations found that a Romano-British farming settlement had occupied the site where the helmet was discovered, which was located a few miles away from a Roman road and a Roman army fort. It is possible that the owner of the helmet was a local inhabitant who had served with the Roman cavalry.
The helmet appears to have been deliberately folded up and deposited in an artificial stone structure. It is thought to have been used for ceremonial occasions rather than for combat, and may already have been an antique by the time it was buried. It is of the same type as the Newstead Helmet (found in 1905) and its design also has similarities with the Ribchester Helmet (found in 1796) and the Hallaton Helmet (found in 2000), though its facial features are more akin to those of helmets found in southern Europe. Its design may allude to the Trojans, whose exploits the Romans re-enacted in cavalry tournaments.
Ralph Jackson, Senior Curator of Romano-British Collections at the British Museum, has described the helmet as "... an immensely interesting and outstandingly important find ... Its face mask is both extremely finely wrought and chillingly striking, but it is as an ensemble that the helmet is so exceptional and, in its specifics, unparalleled. It is a find of the greatest national (and, indeed, international) significance."
On 7 October 2010, the helmet was sold at Christie's for £2.3 million (US$3.6 million) to an undisclosed private buyer. Tullie House Museum and Art Gallery in Carlisle sought to purchase the helmet with the support of the British Museum, but was outbid. The helmet has so far been publicly displayed four times, once in a 2012 exhibition at the Royal Academy of Arts, at Tullie House in 2013–14, followed by display at the British Museum in 2014. The helmet returned to Tullie House to be displayed in the Hadrian's Cavalry exhibition in the summer of 2017.
Description
The Crosby Garrett helmet is an almost complete example of a two-piece Roman cavalry helmet. The visor portrays the face of a youthful, clean-shaven male with curly hair. The headpiece is in the shape of a Phrygian cap, on the crest of which is a winged griffin that stands with one raised foot resting on an amphora. The visor was originally attached to the headpiece by means of a hinge; the iron hinge pin has not survived, but its existence has been inferred from the presence of powdery deposits of iron oxide residue. The helmet would have been held in place using a leather strap attached from the wearer's neck to a decorated rivet on either side of the helmet, below the ear. Wear marks caused by opening and closing the visor are still visible, and at some point the helmet was repaired using a bronze sheet which was riveted across two splits. Only two other Roman helmets complete with visors have been found in Britain – the Newstead Helmet and Ribchester Helmet.
The helmet and visor were cast from an alloy consisting of an average of 82% copper, 10% zinc and 8% tin. This alloy was probably derived from melted-down scrap brass with a low zinc content, to which some tin had been added to improve the quality of the casting. Some of the fragments show traces of a white metal coating, indicating that the visor would originally have been tinned to give the appearance of silver. The griffin was cast separately from a different alloy consisting of 68% copper, 4% zinc, 18% tin and 10% lead. The visor would originally have been a silver hue and the helmet would have had a coppery yellow appearance. The helmet's creation can be dated to the late 2nd or early 3rd century from the use of a particular type of decorated rivet as well as some of its design features, such as its pierced eyes.
There has been much debate about the symbolic meaning of the helmet's design. The griffin was the companion of Nemesis, the goddess of vengeance and fate. They were both seen as agents of death and were often linked with gladiatorial combat. The meanings of the face and headpiece are less clearly identifiable. Suggestions have ranged from the Greek god Attis and the hero Perseus, to the Roman gods Mithras and Jupiter Dolichenus, to a more general Eastern Mediterranean appearance that could possibly have been meant to suggest a Trojan identity. The Phrygian cap was often used by the Romans as a visual motif representing the Trojans. Another interpretation suggests that the face could depict an Amazon, owing in part to the Phrygian cap.
Discovery and restoration
The helmet and visor were found in May 2010 in pastureland on a farm owned by Eric Robinson at Crosby Garrett in Cumbria. The finder, an unnamed metal detectorist in his 20s from Peterlee, County Durham, had been detecting with his father in two adjacent fields for some years but had previously only discovered some Roman coins and other small artefacts. The findspot is situated not far from a Roman road. A number of earthworks are located nearby, indicating the presence of a previously unrecorded ancient settlement. The area was strategically placed on the route to the northern frontier of Roman Britain within the territory of the Carvetii tribe. The Roman army would have been present in the area and would certainly have used the nearby road. A Roman auxiliary fort stood a few miles to the north-east at Verterae (Brough Castle).
Following the helmet's discovery, the area around the findspot was investigated in a project sponsored by the Tullie House Museum and Art Gallery and the Portable Antiquities Scheme. The earthworks noted earlier were found to be part of a substantial enclosure surrounded by ditches, within which buildings had once stood. The enclosure, which measures as much as long on its southern side, combines both native British and Roman methods of fortification. A sunken area within the enclosure may possibly have served as a paddock for horses, while the evidence for the buildings is concentrated in the enclosure's northern portion. The remnants of Romano-British field systems in the surrounding area show that the area was under cultivation and animal remains found on the site indicate that the inhabitants also raised livestock, including sheep, goats and pigs. The presence of Roman pottery suggests that the inhabitants had adopted some elements of the Roman lifestyle, but their community may well have been there long before the Romans arrived. Archaeological evidence from the enclosure indicates that the site may have been first settled as far back as the Bronze Age, at least 1,000 years before the helmet was deposited.
The finder discovered the helmet and visor buried together some 25 cm (10 in) below the surface, at a site located on a ledge at the lower end of the settlement. It had been placed onto two stone slabs at the bottom of a hole which had been back-filled with soil. A stone cap had been laid on top. The helmet was found in 33 large fragments and 34 small fragments and had apparently been folded before burial. The visor was mostly intact and had been placed face down. The griffin had become detached and was found with the helmet. No other artefacts were found at the time, but the subsequent Tullie House/PAS excavations at the findspot discovered a number of copper and iron objects, a bead and two Roman coins dating to 330–337. The coins were found within the artificial stone feature in which the helmet had been deposited and may have been buried at the same time.
The finder did not initially realise that he had found a Roman artefact and thought at first that it was a Victorian ornament. He eventually identified it as Roman by consulting auction catalogues, searching the Internet and getting advice from dealers. Find Liaison Officers from the Portable Antiquities Scheme were notified of the discovery and visited the findspot along with the finder. Christie's commissioned Darren Bradbury, an independent conservator and restorer, to restore the helmet and visor for sale. Although Christie's was asked to delay the restoration so that a full scientific examination could be carried out, this request was not granted and information about the helmet's burial may have been lost as a result. However, the British Museum was able to inspect the find during restoration and X-ray fluorescence spectrometry was carried out to determine the composition of the headpiece, visor and griffin. Bradbury's restoration work took some 240 hours and involved the repair of cracks and holes using resin and cyanoacrylate ("Super Glue") , retouched to match the appearance of the surrounding material.
Similarities and usage
The helmet and visor have marked similarities to a number of other Roman cavalry helmets. The visor is a cavalry sports type C (H. Russell Robinson classification) or type V (Maria Kohlert classification). Similar examples have been found across the Roman Empire from Britain to Syria. It is of the same type as the Newstead Helmet, found in Scotland in 1905, and its facial features most closely parallel a helmet that was found at Nola in Italy and is now in the British Museum. The rendering of the hair is similar to that of a type C helmet found at Belgrade in Serbia and dated to the 2nd century AD. The griffin ornament is unique, though it may parallel a lost "sphinx of bronze" that may originally have been attached to the crest of the Ribchester Helmet, discovered in Lancashire in 1796. The headpiece is nearly unique; only one other example in the form of a Phrygian cap has been found, in a fragmentary state, at Ostrov in Romania, dated to the second half of the 2nd century AD. Rings on the back of the helmet and on the griffin may have been used to attach colourful streamers or ribbons.
Such helmets were used for hippika gymnasia, cavalry tournaments that were performed in front of emperors and senior commanders. Horses and riders wore lavishly decorated clothes, armour and plumes while performing feats of horsemanship and re-enacting historical and legendary battles, such as the wars of the Greeks and Trojans; such displays are described by the Roman writer Arrian.
Combat gear was issued by and belonged to the Roman army, and had to be returned at the end of a wearer's service. Cavalry sports equipment appears to have been treated differently, as soldiers apparently privately commissioned and purchased it for their own use. They evidently retained it after they completed their service. Both helmets and visors have been found in graves and other contexts away from obvious military sites, as well as being deposited in forts and their vicinity. In some cases they were carefully folded up and buried, as in the case of the Guisborough Helmet. The Dutch historian Johan Nicolay has identified a "lifecycle" for Roman military equipment in which ex-soldiers took certain items home with them as a reminder of their service and occasionally disposed of them away from garrison sites as grave goods or votive offerings.
The circumstances in which the Crosby Garrett helmet was buried are still unclear, but the subsequent Tullie House/PAS excavations have provided much more detail about its context. It was clearly deposited within an artificial feature that had been specially constructed; Stuart Noon of the Museum of Lancashire suggests that the feature may have been intended as a memorial of some sort. It was not buried in an isolated spot but within a long-occupied Romano-British farming settlement that had clearly adopted aspects of Roman culture. Given the settlement's proximity to Roman military locations, it is very possible that some of its inhabitants served with the Roman army, which often recruited mounted auxiliaries from among native peoples. The helmet may well already have been a valuable antique at the time of its burial; if the coins found nearby reflect when it was buried, it could have been over a century old by the time it was deposited. It was deliberately broken before being buried in what may have been intended as a ritual sacrifice. The identity of its owner will never be known, but it could have been that a local inhabitant who had formerly served with the Roman cavalry was responsible for the helmet's deposition.
Auction and controversy
Although the find was reported under the Portable Antiquities Scheme, it was not declared treasure under the 1996 Treasure Act because single items of non-precious metal are not covered by the act. The finder and landowner were thus free to dispose of the helmet as they saw fit. The discovery was publicly announced by Christie's in mid-September 2010; the helmet was the centrepiece of its 7 October auction catalogue, featuring on the cover and six more pages. Its value was put at £200,000 – £300,000. The Tullie House Museum and Art Gallery launched an appeal with the aim of purchasing the helmet and making it the focus of a new Roman frontier gallery due to open in 2011. The campaign immediately attracted numerous donations, including £50,000 from an anonymous overseas benefactor who offered the sum if a matching sum could be raised by the public (it was); a £1 million offer from the National Heritage Memorial Fund; a £300,000 pledge from the Headley Trust and the Monument Trust; £200,000 from the Art Fund; and £75,000 from the J Paul Getty Jr Charitable Trust. By the time of the auction three and a half weeks after the campaign had been launched, the museum had raised enough money to support a bid of up to £1.7 million. Behind the scenes, efforts were made to persuade the finder and landowner to agree a private sale with the museum, but these approaches failed.
The initial estimate was passed within seconds of the auction opening. Six bidders pushed the price towards a million pounds and Tullie House was forced to drop out at £1.7 million. Two remaining bidders took the bid past £2 million; the winning bidder, an anonymous UK resident and fine art collector bidding by phone, paid a total of £2,330,468.75 including the buyer's premium and VAT. The outcome aroused controversy and prompted calls for the Treasure Act to be revised, though British Archaeology noted that the circumstances of the helmet's discovery may have resulted in it being outside the scope of even a revised act. It is still possible that the helmet could come into public ownership; if the winning bidder wishes to export it, an export licence would have to be applied for and if a temporary export bar was placed on it an opportunity could arise for funds to be raised by a public institution to purchase the helmet.
Display
Since its sale in 2010, the helmet has been on public display four times. It was lent by its owner to the Royal Academy of Arts in London, and was put on display from 15 September to 9 December 2012 as part of an exhibition of bronzes. From 1 November 2013 until 26 January 2014 the helmet was on display at the Tullie House Museum and Art Gallery in Carlisle, and a printed guide was produced for the occasion. It was subsequently displayed at the British Museum from 28 January to 27 April 2014. The helmet returned to Tullie House to be part of their exhibition for Hadrian's Cavalry, an exhibition spanning ten sites along Hadrian's Wall from April to September 2017.
References
External links
Roman Cavalry Sports helmet from Crosby Garrett, Cumbria by Dr Ralph Jackson
Roman Helmet Appeal
Christie's sale catalogue
"Exceptional Roman cavalry helmet discovered in Cumbria". Daniel Pett, Portable Antiquities Scheme, 13 September 2010
2nd-century works
3rd-century works
2010 archaeological discoveries
Ancient Roman helmets
Archaeological artifacts
Metal detecting finds in England
Roman archaeology
Individual helmets
Roman Armour from Britain |
169364 | https://en.wikipedia.org/wiki/Fluxbox | Fluxbox | Fluxbox is a stacking window manager for the X Window System, which started as a fork of Blackbox 0.61.1 in 2001, with the same aim to be lightweight. Its user interface has only a taskbar, a pop-up menu accessible by right-clicking on the desktop, and minimal support for graphical icons. All basic configurations are controlled by text files, including the construction of menus and the mapping of key-bindings. Fluxbox has high compliance to the Extended Window Manager Hints specification.
Fluxbox is basic in appearance, but it offers several options for customizing its look: colors, gradients, borders, and several other basic appearance attributes can be specified. Recent versions support rounded corners and graphical elements. Compositing and transparency tools such as xcompmgr, cairo-compmgr and transset-df (deprecated) can add true transparency to desktop elements and windows. Further enhancements can be provided by iDesk or fbdesk, SpaceFM, PCMan File Manager or the ROX Desktop. Fluxbox also has several features Blackbox lacks, including tabbed windows and a configurable titlebar.
Because of its small memory footprint and quick loading time, Fluxbox is popular in many Live CDs such as GParted. It was the default window manager of Damn Small Linux and antiX, but was replaced with JWM in 2007 and 2009, respectively. It is currently the default window manager of PCFluxboxOS, a remaster of PCLinuxOS, and of Linux Mint Fluxbox CE. Fluxbuntu, an Ubuntu derivative with lightweight applications, was released in October 2007.
On December 12, 2019, MX Linux released MX-fluxbox as a fully integrated overlay of MX Linux 19. Previously it had been available from 2014 onward through the Package Installer. A Fluxbox edition has been added to the MX-21 series with Fluxbox in use by default. Fluxbox is also a featured window manager on antiX.
The early versions of Lumina, a desktop environment created for TrueOS, were based on Fluxbox.
As of December 2021 there are 22 flavors of Linux using Fluxbox in some way.
Features
Right-clicking on the desktop gives a root menu
Customizable root menu
Support for wallpaper
Running applications appear in a taskbar
Support for desktop themes
Customizable keyboard shortcuts (Key-bindings)
Window tabbing
Slit for applications such as system monitors
Customization
Customization is done by editing configuration files in the .fluxbox subdirectory in the user's home directory:
Keyboard shortcuts are stored in the keys file.
Menu layout is defined in the menu file.
Everything that is run at startup is kept in the startup file.
The main Fluxbox configuration file is init; an illustrative example of the keys and menu formats is shown below.
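For illustration, the snippets below show the general syntax of the keys and menu files; the key bindings and applications chosen here are arbitrary examples rather than Fluxbox defaults.

# ~/.fluxbox/keys — each line is "<modifiers> <key> :<command>"
Mod1 Tab :NextWindow
Mod4 Return :Exec xterm
Mod1 F1 :Workspace 1

# ~/.fluxbox/menu — nested [tag] (label) {command} entries
[begin] (Fluxbox)
    [exec] (Terminal) {xterm}
    [submenu] (Editors)
        [exec] (Vim) {xterm -e vim}
    [end]
    [restart] (Restart)
    [exit] (Exit)
[end]

Editing either file and reloading the configuration (or restarting Fluxbox) applies the changes without restarting the X session.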
See also
Openbox
Comparison of X window managers
Notes
References
Further reading
External links
Fluxbox themes at Customize.org
antiX Website at antiXlinux.com
Useful tools
Fbsetbg : Set wallpaper for Fluxbox.
FluxSpace : A window manager and workspace enhancer and integrator.
Fluxter : A slit pager.
iDesk : a desktop icon utility
iPager : A lightweight pager.
Articles containing video clips
Free software programmed in C++
Free X window managers
Software forks
Software using the MIT license |
20847621 | https://en.wikipedia.org/wiki/Universal%20Darwinism | Universal Darwinism | Universal Darwinism (also known as generalized Darwinism, universal selection theory,
or Darwinian metaphysics) refers to a variety of approaches that extend the theory of Darwinism beyond its original domain of biological evolution on Earth. Universal Darwinism aims to formulate a generalized version of the mechanisms of variation, selection and heredity proposed by Charles Darwin, so that they can apply to explain evolution in a wide variety of other domains, including psychology, linguistics, economics, culture, medicine, computer science and physics.
Basic mechanisms
At the most fundamental level, Charles Darwin's theory of evolution states that organisms evolve and adapt to their environment by an iterative process. This process can be conceived as an evolutionary algorithm that searches the space of possible forms (the fitness landscape) for the ones that are best adapted. The process has three components:
variation of a given form or template. This is usually (but not necessarily) considered to be blind or random, and happens typically by mutation or recombination.
selection of the fittest variants, i.e. those that are best suited to survive and reproduce in their given environment. The unfit variants are eliminated.
heredity or retention, meaning that the features of the fit variants are retained and passed on, e.g. in offspring.
After those fit variants are retained, they can again undergo variation, either directly or in their offspring, starting a new round of the iteration. The overall mechanism is similar to the problem-solving procedures of trial-and-error or generate-and-test: evolution can be seen as searching for the best solution for the problem of how to survive and reproduce by generating new trials, testing how well they perform, eliminating the failures, and retaining the successes.
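This generate-test-retain loop can be illustrated with a minimal evolutionary algorithm such as the Python sketch below, which evolves a bit string toward an arbitrary target; the population size, mutation rate and fitness function are invented for the example and are not tied to any particular domain discussed in this article.

import random

TARGET = [1] * 20  # an arbitrary "well-adapted" pattern for this toy example

def fitness(pattern):
    # Selection criterion: how closely the pattern matches the target.
    return sum(1 for a, b in zip(pattern, TARGET) if a == b)

def mutate(pattern, rate=0.05):
    # Variation: each element may flip with a small probability (blind change).
    return [1 - bit if random.random() < rate else bit for bit in pattern]

def evolve(pop_size=50, generations=200):
    # Heredity: retained variants are copied (with variation) into the next round.
    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]  # selection: the fitter half is retained
        offspring = [mutate(random.choice(survivors)) for _ in range(pop_size - len(survivors))]
        population = survivors + offspring       # retention plus fresh variation
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)

After a few hundred generations the retained patterns closely match the target even though each individual change is blind, which is the essential point of the trial-and-error analogy above.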
The generalization made in "universal" Darwinism is to replace "organism" by any recognizable pattern, phenomenon, or system. The first requirement is that the pattern can "survive" (maintain, be retained) long enough or "reproduce" (replicate, be copied) sufficiently frequently so as not to disappear immediately. This is the heredity component: the information in the pattern must be retained or passed on. The second requirement is that during survival and reproduction variation (small changes in the pattern) can occur. The final requirement is that there is a selective "preference" so that certain variants tend to survive or reproduce "better" than others. If these conditions are met, then, by the logic of natural selection, the pattern will evolve towards more adapted forms.
Examples of patterns that have been postulated to undergo variation and selection, and thus adaptation, are genes, ideas (memes), theories, technologies, neurons and their connections, words, computer programs, firms, antibodies, institutions, law and judicial systems, quantum states and even whole universes.
History and development
Conceptually, "evolutionary theorizing about cultural, social, and economic phenomena" preceded Darwin, but was still lacking the concept of natural selection. Darwin himself, together with subsequent 19th-century thinkers such as Herbert Spencer, Thorstein Veblen, James Mark Baldwin and William James, was quick to apply the idea of selection to other domains, such as language, psychology, society, and culture. However, this evolutionary tradition was largely banned from the social sciences in the beginning of the 20th century, in part because of the bad reputation of social Darwinism, an attempt to use Darwinism to justify social inequality.
Starting in the 1950s, Donald T. Campbell was one of the first and most influential authors to revive the tradition, and to formulate a generalized Darwinian algorithm directly applicable to phenomena outside of biology. In this, he was inspired by William Ross Ashby's view of self-organization and intelligence as fundamental processes of selection. His aim was to explain the development of science and other forms of knowledge by focusing on the variation and selection of ideas and theories, thus laying the basis for the domain of evolutionary epistemology. In the 1990s, Campbell's formulation of the mechanism of "blind-variation-and-selective-retention" (BVSR) was further developed and extended to other domains under the labels of "universal selection theory" or "universal selectionism" by his disciples Gary Cziko, Mark Bickhard, and Francis Heylighen.
Richard Dawkins may have first coined the term "universal Darwinism" in 1983 to describe his conjecture that any possible life forms existing outside the solar system would evolve by natural selection just as they do on Earth. This conjecture was also presented in 1983 in a paper entitled “The Darwinian Dynamic” that dealt with the evolution of order in living systems and certain nonliving physical systems. It was suggested “that ‘life’, wherever it might exist in the universe, evolves according to the same dynamical law” termed the Darwinian dynamic. Henry Plotkin in his 1997 book on Darwin machines makes the link between universal Darwinism and Campbell's evolutionary epistemology. Susan Blackmore, in her 1999 book The Meme Machine, devotes a chapter titled 'Universal Darwinism' to a discussion of the applicability of the Darwinian process to a wide range of scientific subject matters.
The philosopher of mind Daniel Dennett, in his 1995 book Darwin's Dangerous Idea, developed the idea of a Darwinian process, involving variation, selection and retention, as a generic algorithm that is substrate-neutral and could be applied to many fields of knowledge outside of biology. He described the idea of natural selection as a "universal acid" that cannot be contained in any vessel, as it seeps through the walls and spreads ever further, touching and transforming ever more domains. He notes in particular the field of memetics in the social sciences.
In agreement with Dennett's prediction, over the past decades the Darwinian perspective has spread ever more widely, in particular across the social sciences as the foundation for numerous schools of study including memetics, evolutionary economics, evolutionary psychology, evolutionary anthropology, neural Darwinism, and evolutionary linguistics. Researchers have postulated Darwinian processes as operating at the foundations of physics, cosmology and chemistry via the theories of quantum Darwinism, observation selection effects and cosmological natural selection. Similar mechanisms are extensively applied in computer science in the domains of genetic algorithms and evolutionary computation, which develop solutions to complex problems via a process of variation and selection.
Author D. B. Kelley has formulated one of the most all-encompassing approaches to universal Darwinism. In his 2013 book The Origin of Phenomena, he holds that natural selection involves not the preservation of favored races in the struggle for life, as shown by Darwin, but the preservation of favored systems in contention for existence. The fundamental mechanism behind all such stability and evolution is therefore what Kelley calls "survival of the fittest systems." Because all systems are cyclical, the Darwinian processes of iteration, variation and selection are operative not only among species but among all natural phenomena both large-scale and small. Kelley thus maintains that, since the Big Bang especially, the universe has evolved from a highly chaotic state to one that is now highly ordered with many stable phenomena, naturally selected.
Examples of universal Darwinist theories
The following approaches can all be seen as exemplifying a generalization of Darwinian ideas outside of their original domain of biology. These "Darwinian extensions" can be grouped in two categories, depending on whether they discuss implications of biological (genetic) evolution in other disciplines (e.g. medicine or psychology), or discuss processes of variation and selection of entities other than genes (e.g. computer programs, firms or ideas). However, there is no strict separation possible, since most of these approaches (e.g. in sociology, psychology and linguistics) consider both genetic and non-genetic (e.g. cultural) aspects of evolution, as well as the interactions between them (see e.g. gene-culture coevolution).
Gene-based Darwinian extensions
Evolutionary psychology assumes that our emotions, preferences and cognitive mechanisms are the product of natural selection
Evolutionary educational psychology applies evolutionary psychology to education
Evolutionary developmental psychology applies evolutionary psychology to cognitive development
Darwinian Happiness applies evolutionary psychology to understand the optimal conditions for human well-being
Darwinian literary studies tries to understand the characters and plots of narrative on the basis of evolutionary psychology
Evolutionary aesthetics applies evolutionary psychology to explain our sense of beauty, especially for landscapes and human bodies
Evolutionary musicology applies evolutionary aesthetics to music
Evolutionary anthropology studies the evolution of human beings
Sociobiology proposes that social systems in animals and humans are the product of Darwinian biological evolution
Human behavioral ecology investigates how human behavior has become adapted to its environment via variation and selection
Evolutionary epistemology of mechanisms studies how our abilities to gather knowledge (perception, cognition) have evolved
Evolutionary medicine investigates the origin of diseases by looking at the evolution both of the human body and of its parasites
Paleolithic diet proposes that the most healthy nutrition is the one to which our hunter-gatherer ancestors have adapted over millions of years
Paleolithic lifestyle generalizes the paleolithic diet to include exercise, behavior and exposure to the environment
Molecular evolution studies evolution at the level of DNA, RNA and proteins
Biosocial criminology studies crime using several different approaches that include genetics and evolutionary psychology
Evolutionary linguistics studies the evolution of language, biologically as well as culturally
Other Darwinian extensions
Quantum Darwinism sees the emergence of classical states in physics as a natural selection of the most stable quantum properties
Cosmological natural selection hypothesizes that universes reproduce and are selected for having fundamental constants that maximize "fitness"
Complex adaptive systems models the dynamics of complex systems in part on the basis of the variation and selection of its components
Evolutionary computation is a Darwinian approach to the generation of adapted computer programs
Genetic algorithms, a subset of evolutionary computation, models variation by "genetic" operators (mutation and recombination)
Evolutionary robotics applies Darwinian algorithms to the design of autonomous robots
Artificial life uses Darwinian algorithms to let organism-like computer agents evolve in a software simulation
Evolutionary art uses variation and selection to produce works of art
Evolutionary music does the same for works of music
Clonal selection theory sees the creation of adapted antibodies in the immune system as a process of variation and selection
Neural Darwinism proposes that neurons and their synapses are selectively pruned during brain development
Evolutionary epistemology of theories assumes that scientific theories develop through variation and selection
Memetics is a theory of the variation, transmission, and selection of cultural items, such as ideas, fashions, and traditions
Dual inheritance theory a framework for cultural evolution developed largely independently of memetics
Cultural selection theory is a theory of cultural evolution related to memetics
Cultural materialism is an anthropological approach that contends that the physical world impacts and sets constraints on human behavior.
Environmental determinism is a social science theory that proposes that it is the environment that ultimately determines human culture.
Evolutionary economics studies the variation and selection of economic phenomena, such as commodities, technologies, institutions and organizations.
Evolutionary ethics investigates the origin of morality, and uses Darwinian foundations to formulate ethical values
Big History is the science-based narrative integrating the history of the universe, earth, life, and humanity. Scholars consider Universal Darwinism to be a possible unifying theme for the discipline.
Books
Campbell, John. Universal Darwinism: the path of knowledge.
Cziko, Gary. Without Miracles: Universal Selection Theory and the Second Darwinian Revolution.
Hodgson, Geoffrey Martin; Knudsen, Thorbjorn. Darwin's Conjecture: The Search for General Principles of Social and Economic Evolution.
Kelley, D. B. The Origin of Everything via Universal Selection, or the Preservation of Favored Systems in Contention for Existence.
Plotkin, Henry. Evolutionary Worlds without End.
Plotkin, Henry. Darwin Machines and the Nature of Knowledge.
Dennett, Daniel. Darwin's Dangerous Idea.
References
External links
UniversalDarwinism.com
UniversalSelection.com
Darwinism
Evolutionary biology
Evolution |
40920880 | https://en.wikipedia.org/wiki/Northern%20University%2C%20Nowshera | Northern University, Nowshera | The Northern University (NU) is a private non-profit university funded by EDR Trust, located in Nowshera Cantonment, Khyber Pakhtunkhwa.
The university was inaugurated by its President, General (R) Sawar Khan.
Its main campus is located at Wattar Walai Ziarat, Kaka Sahib Road, Nowshera.
Academic programs
Faculty of Engineering and Information Technology
It offers undergraduate programs in Electrical Engineering, Computer Science, Information Technology, and Electronics, and postgraduate degrees in Electronics, Computer Science, and Information Technology.
Faculty of Administrative Sciences
The Faculty of Administrative Sciences offers undergraduate and graduate programs in Business Administration with a focus on core strategies in management and international business.
Faculty of Arts and Social Sciences
It offers four-year undergraduate programs in Textile and Fashion Designing and Home Economics, and postgraduate degrees in Economics, Education, English, Urdu, and Journalism.
Faculty of Sciences
It offers MSc degrees in Mathematics and Environmental Science, and an MPhil degree in Environmental Science.
Campuses
The university originally had two campuses; however, following the successful completion of an asphalted road and state-of-the-art security arrangements, it has consolidated into a single main campus, located at Wattar Walai Ziarat, Kaka Sahib Road, Nowshera.
The foundation stone for the main block was laid by Mr. Khalil Ur Rehman, Governor NWFP in April 2005.
Gallery
References
External links
NU official website
Pakistan Army universities and colleges
Universities and colleges in Nowshera District
Educational institutions established in 2002
2002 establishments in Pakistan
Private universities and colleges in Pakistan |
49381526 | https://en.wikipedia.org/wiki/Google%20OnHub | Google OnHub | Google OnHub is a residential wireless router product from Google, Inc. The two variants are manufactured by TP-Link and ASUS. Google's official tagline for the product is "We’re streaming and sharing in new ways our old routers were never built to handle. Meet OnHub, a router from Google that is built for all the ways you use Wi-Fi." In 2016, Google released the Google Wifi router with mesh networking, and combined its functionality and network administration with the OnHub so that OnHub and Google Wifi may both be used interchangeably in mesh networks.
Google touts the OnHub router as "easy to use and ready for the future" for its intuitive interface. According to OnHub specifications, both OnHub models are "Weave Ready" and "Bluetooth Smart Ready". The future enablement of these network protocols is possible because OnHub routers have an IEEE 802.15.4 radio antenna and a Bluetooth antenna. However, as of July 2020, the Bluetooth and 802.15.4 functionality had not been enabled.
OnHub routers have a dual-core 1.4 GHz CPU, 1 GB of RAM, and 4 GB of flash storage. Like Google Wifi, the OnHub creates a single SSID for both the 2.4 GHz and 5 GHz bands to simplify the Wi-Fi experience for the end user. The OnHub will automatically steer devices to connect to the band with the best connection.
In December 2021, Google announced that OnHub routers would no longer receive any software or security updates. After December 19, 2022, other features would be disabled like updating the Wi-Fi network settings, adding additional Wi-Fi devices, running speed tests, or using Google Assistant.
Product comparison
The OnHub router from TP-Link is available in black or blue. The TP-Link router also has a removable exterior shell that can be swapped for shells in other colors, to help the OnHub fit aesthetically into various environments. The OnHub router from ASUS is available in Slate Gray. There are also non-aesthetic features unique to each model. The TP-Link model features a "specialized reflector" for an internal antenna that can be adjusted to extend Wi-Fi range on the 2.4 GHz band in one direction. The ASUS model has a "Wave Control" feature that allows users to prioritize a specific device on Wi-Fi by waving a hand over a light sensor on the top.
See also
Google Nest Wifi
References
Wireless networking hardware
OnHub |
42713890 | https://en.wikipedia.org/wiki/CSI%3A%20Cyber | CSI: Cyber | CSI: Cyber (Crime Scene Investigation: Cyber) is an American police procedural drama television series that premiered on March 4, 2015, on CBS. The series, starring Patricia Arquette and Ted Danson, is the third spin-off of CSI: Crime Scene Investigation and the fourth series in the CSI franchise. On May 12, 2016, CBS canceled the series after two seasons.
Plot
The series follows an elite team of FBI Special Agents tasked with investigating cyber crimes in North America. Based out of Washington, D.C., the team is supervised by Deputy Director Avery Ryan, an esteemed Ph.D. Ryan is a behavioral psychologist turned "cyber shrink" who established the FBI Cyber Crime division and heads a "hack-for-good" program, a scheme in which the criminals she catches can work for her in lieu of receiving a prison sentence. Ryan works with D.B. Russell, a left-coast Sherlock Holmes and career Crime Scene Investigator who joins the team after a stint as Director of the Las Vegas Crime Lab. Together, Russell and Ryan head a team including Elijah Mundo, Daniel Krumitz (aka Krummy), Raven Ramirez, and Brody Nelson, who work to solve Internet-related murders, cyber theft, hacking, sexual offenses, blackmail, and any other crime deemed to be cyber-related within the FBI's jurisdiction.
Cast and characters
Main
Patricia Arquette as Dr. Avery Ryan, a Deputy Director of the FBI. Avery was a renowned psychologist in New York when her professional database was hacked, resulting in the death of a patient. This, coupled with the recent death of her daughter, led Avery to leave New York and to found the FBI's Cyber Crime Division. As Special Agent in Charge, Avery heads a team of former cyber criminals and federal agents who travel nationwide in search of the criminals who work on the dark net. After several years working under Simon Sifter, Ryan was promoted to Deputy Director of the FBI. Following the death of her daughter, Ryan divorced her husband, though the two later reconnect and begin a romantic relationship. Following the dissolution of her hack-for-good program, Avery begins recruiting young hackers to work within CTOC (Cyber Threat Operations Center).
James Van Der Beek as Elijah Mundo, a Senior FBI Field Agent. Assigned to Ryan's Cyber division, Mundo is a former U.S. Marine, and an expert in battlefield forensics, weaponry, vehicles, and bombs. After a short separation from his wife, Elijah filed for divorce, though the two have since reconciled. He is an avid gamer, and because of this he is all too aware of the dangers that lurk on the internet. As a field agent, Mundo is much more hands on than his counterparts in detaining and interviewing suspects. During the second season, Mundo's father was diagnosed with cancer, causing him to confide in a barmaid who later begins stalking him. In "Legacy", she is shot and killed by Russell. Elijah has one daughter.
Peter MacNicol as Simon Sifter (season 1), an FBI Assistant Deputy Director. Simon works out of the FBI Headquarters in Washington and oversees Ryan and her Cyber division. Working his way up through the ranks investigating homicides, drive-bys, and gang warfare, Sifter has built up an array of contacts and connections that are invaluable to the FBI. He often acts as the clearinghouse between FBI Cyber and its intergovernmental counterparts. He is married with at least one child. Sometime following the first season, Sifter vacated his position.
Shad Moss as Brody Nelson, an FBI analyst. Brody is a former black hat hacker who was caught and later recruited by FBI Cyber as part of Ryan's "hack-for-good" program. Because Brody avoided a prison sentence at the FBI's behest, the team is initially dubious of his intentions. After struggling to sever his ties with the world of cyber crime, Nelson outs himself to the hacking community and proves himself to be a loyal and valued analyst. The team nicknamed him "Baby-Face". He is shown to be arachnophobic. During season two, Nelson discovers that the FBI obtained evidence against him illegally, and as such his conviction is overturned. He later undergoes training at Quantico, and returns to Cyber as an agent.
Charley Koontz as Daniel Krumitz, a Special Agent. Daniel notes his desire to join law enforcement began when his parents were murdered. He is a skilled analyst (Avery claims he is the best white hat hacker in the world), and he is also brutally honest. He is a quick-witted introvert who specializes in technology, though he is also a trained field agent. He has one sister, named Francine, and has strong bonds with both Nelson and Ryan. Krumitz is often referred to by his nickname "Krummy".
Hayley Kiyoko as Raven Ramirez, an FBI analyst. Raven, like Nelson, is a former black hat hacker recruited by Ryan as part of her "hack-for-good" program. At the start of the first season, she had been with the FBI for two years, and had become a specialist in social media investigations, international relations and cyber trends. Raven stated that, as a victim of cyber-bullying, she turned to hacking as a refuge from her tormentors. Raven is a trusted member of the team. Her hacker alias was Eclipse. After Avery's hack-for-good program is dissolved, Raven is awarded time-served, and thus spared prison. She later rejoins the FBI as a consultant specialist.
Ted Danson as D.B. Russell (season 2), the Director of Next Generation Cyber Forensics. D.B. is described as a "left-coast Sherlock Holmes", the son of hippies and a keen forensic botanist. As a trained Crime Scene Investigator, Russell joins Cyber following a four-year stint as Director of the Las Vegas Crime Lab, looking for a fresh start after a divorce and the death of his "soulmate" Julie Finlay. Avery jokes that she offered Russell a job because of his selection of tea leaves and often mocks his Zen characteristics. Russell has four children and one granddaughter and, since joining Cyber, has expressed an interest in dating. He has formed close bonds with Daniel Krumitz and Brody Nelson. Russell later begins a relationship with Greer Latimore, then tenders his resignation in order to move to Paris. During the second-season finale, he is shot, but recovers from his injuries.
Recurring
Brent Sexton as Andrew Michaels, Avery's ex-husband. Andrew and she were married with a young daughter. Following the loss of their child, the two separated. In the season one finale, he notes that he has not seen Avery since she joined the FBI. During season two, he reconnects with Ryan after a hacker files a false death certificate in his name. Despite being engaged to another woman, he and Avery begin a romantic relationship once again.
Kelly Preston as Greer Latimore, a former Secret Service agent turned private investigator. Greer first becomes known to the Cyber team after D.B. meets her in a bar. They later begin a romantic relationship and, after Greer is offered a job in Paris, D.B. tenders his resignation.
Michael Irby as David Ortega, a U.S. Naval captain. David is a medical doctor stationed out of Washington, often seen assisting Avery Ryan's team's investigations. He is a trained medical examiner.
Angela Trimbur as Francine Krumitz, Daniel's sister. Francine is hung up on the death of her parents. Although initially better at hiding it than her brother, she later shoots and kills their parents' murderer.
Mckenna Grace as Michelle Mundo, Elijah's daughter. Referred to as "Mitchie" by her family, Michelle currently lives with her mother Devon, though she maintains a strong relationship with Elijah. She is initially the focus of a custody dispute.
Alexie Gilmore as Devon Atwood, Elijah's wife. Devon and Elijah had, during the first season, recently separated. She is still very much in love with Elijah, as he is with her. The two decide to reconcile their romance, but keep it a secret from their daughter.
Marcus Giamatti as Artie Sneed, a quirky amateur technology expert who occasionally helps the cyber unit with cases.
Sean Blakemore as Director Marcus Silver, Avery's boss.
Diogo Morgado as Miguel Vega: an Interpol agent, Avery's friend, who helped the team to stop Python.
Episodes
Introductory episodes
Season 1 (2015)
The first season of CSI: Cyber is headlined by Patricia Arquette, as Special Agent Avery Ryan. James Van Der Beek, Peter MacNicol, Shad Moss, Charley Koontz, and Hayley Kiyoko also star.
Season 2 (2015–16)
The second season of CSI: Cyber is headlined by Patricia Arquette, as Deputy Director Avery Ryan, and Ted Danson as Director D.B. Russell. James Van Der Beek, Shad Moss, Charley Koontz, and Hayley Kiyoko also star.
Production
Development
On February 18, 2014, CBS announced plans to launch a new spin-off of the franchise titled CSI: Cyber. Deadline.com reported that the series would focus on cyber investigations, as opposed to the forensic investigations seen in CSI, CSI: Miami, and CSI: NY, stating that "[Anthony E.] Zuiker has been at the forefront of entertainment’s digital conversion, experimenting in the arena for the past decade." Zuiker, who wrote digi-novel Level 26, spent time in Washington meeting with the CIA, FBI, and DOD as part of his research for his 2009 CBS project Cyber Crimes (which was not picked up to series and likely inspired CSI: Cyber). It was announced that the series would be based on the work of producer Mary Aiken, a pioneering cyber psychologist. The pilot episode was penned by Zuiker, Carol Mendelsohn, and Ann Donahue, and aired on April 30, 2014. CBS announced that it had officially picked up the series on May 10, 2014. The first season, comprising 13 episodes, premiered in March 2015. The second and final season consisted of 18 episodes.
The series is executive produced by creators Carol Mendelsohn, Anthony E. Zuiker, and Ann Donahue, former CSI: NY executive producer Pam Veasey (who acts as showrunner), Jonathan Littman, and Jerry Bruckheimer. Mary Aiken, on whom the show is based, is attached as a series producer. Peter MacNicol departed the main cast at the end of the first season, whilst CBS announced on May 11, 2015, that CSI: Cyber was renewed for a second season. On June 25, 2015, Moss confirmed in an interview on The Project that season 2 would include 22 episodes. Season 2 was reduced from 22 to 18 episodes, ending with the episode titled "Legacy".
Casting
On March 5, 2014, Patricia Arquette was cast as Special Agent Avery Ryan in a Spring episode of CSI. Ryan was described as being "tasked with solving high octane crimes that start out in the cyber world and play out in real life". Charley Koontz was the next actor to be cast, playing a character then named Daniel Krummitz, an Agent that " rarely, if ever, goes home". Peter MacNicol joined the cast on August 1, 2014, as Assistant Director Stavros Sifter, "a shrewd and savvy networker; a charmer with a hint of malice". Koontz and MacNicol's characters were later renamed Daniel Krumitz and Simon Sifter, respectively. These announcements were swiftly followed by the casting of James Van Der Beek as the male lead, in the role of Elijah Mundo. Mundo was described as "an expert in battlefield forensics recruited by Patricia Arquette’s character". Shad Moss announced his casting on August 20, 2014, via his Instagram account. He was later confirmed to be playing "Baby Face" Nelson. Rounding out the original cast was Hayley Kiyoko as Raven Ramirez, described as a character who "will possess a dark secret in her front story which Ryan won’t even know until it’s too late". Kiyoko was cast on October 29, 2014.
Following the cancellation of CSI: Crime Scene Investigation, it was announced that Ted Danson would be joining the Cyber cast as D.B. Russell, the newly appointed Director of Next Generation Cyber Forensics.
Filming
CSI: Cyber's primary photography takes place at CBS' Studio Center in the Los Angeles, California neighborhood of Studio City. Numerous outdoor scenes are filmed locally in the Los Angeles area, including Matteo Street, Spring Street, Main Street, the Arroyo Seco, and the Colorado Street Bridge.
Music
The series' theme song is "I Can See for Miles" by The Who, and the series' music composers are Jeff Russo and Ben Decter. Songs featured throughout the first season include: "Thunderbolt" by Justin Prime and Sidney Samson (episode 2), "Take a Ride" by Rattle Box (episode 4), "Re-Creation" by Strikez (episode 5), "With Me" by Underglow (episode 6), "Five Foot Two, Eyes of Blue" by Art Landry and His Orchestra (episode 8), "Get Your Hands Up" by Uforik (episode 9), "Let Go for Tonight" by Foxes (episode 10), and "How Sweet It Is (To Be Loved by You)" by Marvin Gaye (episode 12). The second season features music by Underglow ("Save the Day", episode 1), With Lions ("Jitterbug", episode 2), Savoy ("Pump it Up", episode 3) and Motabeatz ("Watch Your Back", episode 5). The show also features additional music by electronic music producer Nick Chiari, who produces under the alias Grabbitz.
Reception
Ratings
Critical response
The first season of CSI: Cyber received mixed reviews. On Rotten Tomatoes, the season has a rating of 35%, based on 31 reviews, with an average rating of 5.1/10. The site's critical consensus reads, "While stocked with impressive talent, CSI: Cyber fails to add anything truly new to the franchise, settling for a slightly modernized twist on the same typical crimefighting scenarios." On Metacritic, the season has a score of 45 out of 100, based on 23 critics, indicating "mixed or average reviews".
World record
Producers announced intentions to break the Guinness World Record for largest ever TV simulcast drama on March 4, 2015, with the episode "Kitty" airing in 150 countries in addition to digital streaming. They succeeded in breaking the record by airing CSI: Cyber's backdoor pilot in 171 countries.
International broadcast
The series has been sold to Channel 5 in the United Kingdom, CTV in Canada, Rai 2 in Italy, Network Ten in Australia, Prime in New Zealand, RTÉ2 in Ireland, TF1 in France, AXN in Asia and Latin America, RTL 5 in The Netherlands, Nova in Bulgaria, Skai TV in Greece, HOT Zone in Israel, TV3 in Estonia, Kanal 5 in Sweden and Denmark, and MTV3 in Finland.
Home video releases
References
External links
2015 American television series debuts
2016 American television series endings
2010s American crime drama television series
2010s American mystery television series
2010s American police procedural television series
CBS original programming
English-language television shows
Television series about the Federal Bureau of Investigation
Television series by CBS Studios
Television shows set in Fairfax County, Virginia
Television shows set in Virginia
American television spin-offs
Malware in fiction
Television series created by Anthony E. Zuiker
Television series created by Carol Mendelsohn
Television series created by Ann Donahue
Works about computer hacking
Works about cybercrime |
349136 | https://en.wikipedia.org/wiki/Gordon%20Bell | Gordon Bell | Chester Gordon Bell (born August 19, 1934) is an American electrical engineer and manager. An early employee of Digital Equipment Corporation (DEC) 1960–1966, Bell designed several of their PDP machines and later became Vice President of Engineering 1972–1983, overseeing the development of the VAX. Bell's later career includes entrepreneur, investor, founding Assistant Director of NSF's Computing and Information Science and Engineering Directorate 1986–1987, and researcher emeritus at Microsoft Research, 1995–2015.
Early life and education
Gordon Bell was born in Kirksville, Missouri. He grew up helping with the family business, Bell Electric, repairing appliances and wiring homes.
Bell received a B.S. (1956), and M.S. (1957) in electrical engineering from MIT. He then went to the New South Wales University of Technology (now UNSW) in Australia on a Fulbright Scholarship, where he taught classes on computer design, programmed one of the first computers to arrive in Australia (called UTECOM, an English Electric DEUCE) and published his first academic paper. Returning to the U.S., he worked in the MIT Speech Computation Laboratory under Professor Ken Stevens, where he wrote the first Analysis by Synthesis program.
Career
Digital Equipment Corporation
The DEC founders Ken Olsen and Harlan Anderson recruited him for their new company in 1960, where he designed the I/O subsystem of the PDP-1, including the first UART. Bell was the architect of the PDP-4, and PDP-6. Other architectural contributions were to the PDP-5 and PDP-11 Unibus and General Registers architecture.
After DEC, Bell went to Carnegie Mellon University in 1966 to teach computer science, but returned to DEC in 1972 as vice-president of engineering, where he was in charge of the VAX, DEC's most successful computer.
Entrepreneur and policy advisor
Bell retired from DEC in 1983 as the result of a heart attack, but soon after founded Encore Computer, one of the first shared memory, multiple-microprocessor computers to use the snooping cache structure.
During the 1980s he became involved with public policy, becoming the first and founding Assistant Director of the CISE Directorate of the NSF, and led the cross-agency group that specified the NREN.
Bell also established the ACM Gordon Bell Prize (administered by the ACM and IEEE) in 1987 to encourage development in parallel processing. The first Gordon Bell Prize was won by researchers at the Parallel Processing Division of Sandia National Laboratory for work done on the 1000-processor nCUBE 10 hypercube.
He was a founding member of Ardent Computer in 1986, becoming VP of R&D 1988, and remained until it merged with Stellar in 1989, to become Stardent Computer.
Microsoft Research
Between 1991 and 1995, Bell advised Microsoft in its efforts to start a research group, then joined it full-time in August 1995, studying telepresence and related ideas. He is the experiment subject for the MyLifeBits project, an experiment in life-logging (not the same as life-blogging) and an attempt to fulfill Vannevar Bush's vision of an automated store of the documents, pictures (including those taken automatically), and sounds an individual has experienced in his lifetime, to be accessed with speed and ease. For this, Bell has digitized all documents he has read or produced, CDs, emails, and so on. He continues to do so, gathering web pages browsed, phone and instant messaging conversations and the like more or less automatically.
Honors
Bell was elected a member of the National Academy of Engineering in 1977 for contributions to the architecture of minicomputers. He is also a Fellow of the American Academy of Arts and Sciences (1994), American Association for the Advancement of Science (1983), Association for Computing Machinery (1994), IEEE (1974), and member of the National Academy of Sciences (2007), and Fellow of the Australian Academy of Technological Sciences and Engineering (2009).
He is also a member of the advisory board of TTI/Vanguard and a former member of the Sector Advisory Committee of Australia's Information and Communication Technology Division of the Commonwealth Scientific and Industrial Research Organisation.
Bell was the first recipient of the IEEE John von Neumann Medal, in 1992. His other awards include Fellow of the Computer History Museum, honorary D. Eng. from WPI, the AeA Inventor Award, the Vladimir Karapetoff Outstanding Technical Achievement Award of Eta Kappa Nu, and the 1991 National Medal of Technology by President George H.W. Bush. He was also named an Eta Kappa Nu Eminent Member in 2007.
In 1993, Worcester Polytechnic Institute awarded Bell an Honorary Doctor of Engineering, and in 2010, Bell received an honorary Doctor of Science and Technology degree from Carnegie Mellon University. The university referred to him as "the father of the minicomputer".
Bell co-founded The Computer Museum, Boston, Massachusetts, with his wife Gwen Bell in 1979. He was a founding board member of its successor, the Computer History Museum, Mountain View, California. In 2003, he was made a Fellow of the Museum "for his key role in the minicomputer revolution, and for contributions as a computer architect and entrepreneur." The story of the museum's evolution beginning in the early 1970s with Ken Olsen at Digital Equipment Corporation is described in the Microsoft Technical Report MSR-TR-2011-44, "Out of a Closet: The Early Years of The Computer [x]* Museum". A timeline of computing historical machines, events, and people is given on his website. It covers from B.C. to the present.
Bell's law of computer classes
Bell's law of computer classes was first described in 1972 with the emergence of a new, lower priced microcomputer class based on the microprocessor. Established market class computers are introduced at a constant price with increasing functionality and performance. Technology advances in semiconductors, storage, interfaces and networks enable a new computer class (platform) to form about every decade to serve a new need. Each new usually lower priced class is maintained as a quasi independent industry (market). Classes include: mainframes (1960s), minicomputers (1970s), networked workstations and personal computers (1980s), browser-web-server structure (1990s), palm computing (1995), web services (2000s), convergence of cell phones and computers (2003), and Wireless Sensor Networks aka motes (2004). Bell predicted that home and body area networks would form by 2010.
Books
(with Allen Newell) Computer Structures: Readings and Examples (1971, )
(with C. Mudge and J. McNamara) Computer Engineering (1978, )
(with Dan Siewiorek and Allen Newell) Computer Structures: Readings and Examples (1982, )
(with J. McNamara) High Tech Ventures: The Guide for Entrepreneurial Success (1991, )
(with Jim Gemmell) Total Recall: How the E-Memory Revolution will Change Everything (2009, )
(with Jim Gemmell) Your Life Uploaded: The Digital Way to Better Memory, Health, and Productivity (2010, )
See also
MyLifeBits
Microsoft SenseCam
Lifelog
References
Further reading
Wilkinson, Alec, "Remember This?" The New Yorker, 28 May 2007, pp. 38–44.
External links
CBS Evening News video interview on the MyLifeBits Project, 2007.
American computer scientists
Computer designers
Computer hardware engineers
1934 births
Living people
Carnegie Mellon University faculty
Digital Equipment Corporation people
Fellow Members of the IEEE
Fellows of the American Academy of Arts and Sciences
Fellows of the Association for Computing Machinery
MIT School of Engineering alumni
Microsoft employees
Microsoft Research people
National Medal of Technology recipients
People from Kirksville, Missouri
Members of the United States National Academy of Engineering
Fellows of the Australian Academy of Technological Sciences and Engineering
Members of the United States National Academy of Sciences
20th-century American scientists
21st-century American scientists
Silicon Valley people |
3292564 | https://en.wikipedia.org/wiki/Mathematical%20Applications%20Group | Mathematical Applications Group | Mathematical Applications Group, Inc. (a.k.a. MAGi or MAGi/SynthaVision) was an early computer technology company founded in 1966 by Dr. Philip Mittelman and located in Elmsford, New York, where it was evaluating nuclear radiation exposure. By modeling structures using combinatorial geometry mathematics and applying monte carlo radiation ray tracing techniques, the mathematicians could estimate exposures at various distances and relative locations in and around fictional structures. In 1972, the graphics group called MAGi/SynthaVision was formed at MAGi by Robert Goldstein.
It was one of four companies hired to create the CGI animation for the film Tron. MAGi was responsible for most of the CGI animation in the first half of Tron, while Triple-I worked mainly on the second half of the film. MAGi modeled and animated the light cycles, recognizers and tanks.
Product and legacy
MAGi developed a software program called SynthaVision to create CGI images and films. SynthaVision was one of the first systems to implement a ray tracing algorithmic approach to hidden surface removal in rendering images. The software was a constructive solid geometry (CSG) system, in that the geometry was solid primitives with combinatorial operators (such as Boolean operators). SynthaVision's modeling method does not use polygons or wireframe meshes that most CGI companies use today. The combination of the solid modeling and ray tracing (later to become plane firing) made it a very robust system that could generate high quality images.
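As a rough illustration of the general technique, the Python sketch below intersects a ray with two sphere primitives and applies Boolean operators to the resulting parameter intervals; it is a hypothetical reconstruction of the idea, not SynthaVision's actual algorithms or code.

import math

def sphere_interval(origin, direction, center, radius):
    # Return the (t_enter, t_exit) interval of ray parameter t inside the sphere, or None if missed.
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None
    root = math.sqrt(disc)
    return ((-b - root) / (2.0 * a), (-b + root) / (2.0 * a))

def csg_intersection(i1, i2):
    # Boolean "and" of two solids along the ray: overlap of their inside intervals.
    if i1 is None or i2 is None:
        return None
    lo, hi = max(i1[0], i2[0]), min(i1[1], i2[1])
    return (lo, hi) if lo < hi else None

def csg_difference(i1, i2):
    # Boolean subtraction i1 minus i2, simplified here to keep only the front segment.
    if i1 is None:
        return None
    if i2 is None or i2[0] >= i1[1] or i2[1] <= i1[0]:
        return i1
    if i2[0] > i1[0]:
        return (i1[0], i2[0])
    return (i2[1], i1[1]) if i2[1] < i1[1] else None

# A lens-shaped solid built as the intersection of two overlapping spheres.
ray_origin, ray_dir = (0.0, 0.0, -5.0), (0.0, 0.0, 1.0)
a = sphere_interval(ray_origin, ray_dir, (0.0, 0.0, 0.5), 1.0)
b = sphere_interval(ray_origin, ray_dir, (0.0, 0.0, -0.5), 1.0)
hit = csg_intersection(a, b)
if hit is not None and hit[1] > 0:
    print("ray enters the CSG solid at t =", max(hit[0], 0.0))

A full renderer would repeat this per pixel and shade the surface at the nearest positive entry point; union is handled analogously but can yield more than one interval per ray.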
MAGi created the world's first CGI advertisement for IBM. It featured 3D letters that flew out of an office machine.
History
In 1972, MAGi/SynthaVision was started by Robert Goldstein, with Bo Gehring and Larry Elin covering the design and film/television interests, respectively.
Two of the first television commercial applications were storyboarded by Texas artist, Gordon Blocker in 1973-4 for the Texas Commerce Bank "Flag Card" commercial and a news open for KHOU-TV (CBS) in Houston, Texas.
Tron
In 1981, MAGi was hired by Disney to create about half of the 20 minutes of CGI needed for the film Tron. Twenty minutes of CGI animation was an ambitious undertaking in the early 1980s, so MAGi produced a portion of the animation while other companies were hired to create the remaining shots. Since SynthaVision was easy to animate and could create fluid motion and movement, MAGi was assigned most of Tron's action sequences. These classic scenes include the light cycle sequence and Clu's tank and recognizer pursuit scene. Despite the high-quality images that SynthaVision was able to create, its CSG solid modeling could not represent objects with complex shapes and multiple curves, so simpler objects like the light cycles and tanks were assigned to MAGi. MAGi was given $1.2 million to finance the animation needed for Tron. MAGi needed more R&D, and many engineers who had been working on government contracts at MAGi were reassigned to its "SynthaVision" division.
MAGi sped up the process of supplying its work to Disney Studios in Burbank by a transcontinental computer hook-up. Before each scene was finalized in MAGi's lab in Elmsford, New York, it was previewed on a computer monitor at Disney. Corrections could then be made in the scene immediately. Previously, the only way of previewing the scene was to film it, ship it to Burbank, get corrections made, ship it back to Elmsford, and continue this "ping-ponging" until the scene was correct. The computer link cut between two-and-a-half to five days from the creation of each scene.
During the production of Tron, animators and computer image choreographers Bill Kroyer and Jerry Rees invited John Lasseter (who would later co-found Pixar) to see some of the light cycle animation. Lasseter said in "The Making of Tron" featurette that the light cycle animation was the first CGI animation he had ever seen.
After Tron
In 1983, Disney commissioned MAGi to create a test film featuring characters from the children's book Where the Wild Things Are. The Wild Things test used CGI animation for the backgrounds and traditional 2D animation for the characters "Max" and his dog. Animators John Lasseter and Glen Keane directed the test for Disney. At MAGi, Larry Elin directed Chris Wedge and Jan Carle and produced a 3D background pencil test based on Disney's story animatics. Lasseter and Keane at Disney then hand-animated over the CG background wireframes. A tight bi-coastal production loop was designed. MAGi programmer Josh Pines developed film scanning software to digitize and clean up the final hand-drawn character footage from Disney, and the scanning software was refined to produce cleaner digitized images. Concurrently, an ink-and-paint system was written by Christine Chang, Jodi Slater and Ken Perlin for production. This early paint system would fill in color within character line borders and apply shadow, highlight and a blur to the color areas to produce an airbrushed 2½-D effect. The final painted characters and CG-rendered backgrounds were digitally composited, color corrected and scanned back onto film with a Celco camera for lab processing and delivery back to Disney.
In 1984, MAGi opened an office in Los Angeles, California. The office was headed by Richard Taylor, who had worked as a special effects supervisor at Triple-I. Taylor, Wedge and Carle directed a test for the Disney film Something Wicked This Way Comes. The software and computing hardware proved insufficient for the proposed animation and effects, and the Los Angeles office was closed shortly after its establishment.
Also in 1984, Michael Ferraro and Tom Bisogno began production on a short film, "First Flight", for the SIGGRAPH '84 Electronic Theater. To achieve the organic textures they envisioned, such as clouds, water and bark, they proposed an artists' programming language (KPL) to Ken Perlin for use in the production. Perlin and Josh Pines finalized revision 1 of KPL in time for it to be used for some effects on the film. KPL was extremely fast because it used a reverse-Polish stack computation method. Carl Ludwig would later use KPL to great effect on ocean cycloid images and realistic cloud formations. Perlin noise and other organic procedural textures were also created by Ken Perlin as early built-in image functions for the KPL programming language.
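The reverse-Polish evaluation strategy can be illustrated with a toy stack evaluator such as the Python sketch below; the operator set, the variable handling and the example expression are invented for illustration and are not taken from KPL itself.

def eval_rpn(tokens, variables):
    # Evaluate a postfix (reverse-Polish) expression using a simple operand stack.
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()      # operands are popped, the result is pushed back
            a = stack.pop()
            stack.append(ops[tok](a, b))
        elif tok in variables:
            stack.append(variables[tok])
        else:
            stack.append(float(tok))
    return stack.pop()

# Compute x * 0.5 + y for one "pixel"; an image routine would loop this over every pixel.
print(eval_rpn(["x", "0.5", "*", "y", "+"], {"x": 10.0, "y": 3.0}))  # prints 8.0

Because no parsing or operator-precedence handling is needed at evaluation time, such stack machines can run per-pixel expressions quickly, which is the property attributed to KPL above.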
Much of the MAGi/SynthaVision software was Fortran-based, with a Ratfor interface for the artists to use. In 1985, Josh Pines argued for using the Unix programming environment for any future software and production programming design. Michael Ferraro, Carl Ludwig and Tom Bisogno began the initial design of an open CG animators' programming environment with a C-like interface (Hoc) for the artists and procedural functionality like Perlin's KPL.
Soon after, the SynthaVision software was sold to Lockheed's CADAM division as the foundation of ISD (Interactive Solids Design), and MAGi was formally sold to a Canadian firm, Vidmax (which later went defunct); many of the employees left for other CGI companies and universities.
Phillip Mittelman, the founder of MAGi, died in 2000.
MAGi staff (1975–1985)
Dr. Phil Mittleman
Bo Gehring
Robert Goldstein
Harold S. Schechter
Larry Elin
Marty Cohen
Herb Steinberg
Dr. Eugene Troubetzkoy
Ken Perlin
Evan Laski
Chris Wedge
Tom Bisogno
Carl Ludwig
Jan Carlee
Gene Miller
Josh Pines
Christine Chang
Elyse Veintrub
Kevin Egan
Paul Harris
Richard Taylor
Tom Miller
David Brown
Mike Ferraro
Alison Brown
John Beach
Glenn Alsup
J.A. Lopez
References
Software companies established in 1966
Computer animation
Visual effects companies
1966 establishments in New York (state) |
500163 | https://en.wikipedia.org/wiki/Self-replicating%20spacecraft | Self-replicating spacecraft | The idea of self-replicating spacecraft has been applied – in theory – to several distinct "tasks". The particular variant of this idea applied to the idea of space exploration is known as a von Neumann probe after mathematician John von Neumann, who originally conceived of them. Other variants include the Berserker and an automated terraforming seeder ship.
Theory
Von Neumann argued that the most effective way of performing large-scale mining operations, such as mining an entire moon or asteroid belt, would be by self-replicating spacecraft, taking advantage of their exponential growth. In theory, a self-replicating spacecraft could be sent to a neighbouring planetary system, where it would seek out raw materials (extracted from asteroids, moons, gas giants, etc.) to create replicas of itself. These replicas would then be sent out to other planetary systems. The original "parent" probe could then pursue its primary purpose within the star system. This mission varies widely depending on the variant of self-replicating starship proposed.
Given this pattern, and its similarity to the reproduction patterns of bacteria, it has been pointed out that von Neumann machines might be considered a form of life. In his short story "Lungfish" (see Self-replicating machines in fiction), David Brin touches on this idea, pointing out that self-replicating machines launched by different species might actually compete with one another (in a Darwinistic fashion) for raw material, or even have conflicting missions. Given enough variety of "species" they might even form a type of ecology, or – should they also have a form of artificial intelligence – a society. They may even mutate with untold thousands of "generations".
The first quantitative engineering analysis of such a spacecraft was published in 1980 by Robert Freitas, in which the non-replicating Project Daedalus design was modified to include all subsystems necessary for self-replication. The design's strategy was to use the probe to deliver a "seed" factory with a mass of about 443 tons to a distant site, have the seed factory produce many copies of itself there to increase its total manufacturing capacity over a 500-year period, and then use the resulting automated industrial complex to construct more probes with a single seed factory on board each.
It has been theorized that a self-replicating starship utilizing relatively conventional theoretical methods of interstellar travel (i.e., no exotic faster-than-light propulsion, and speeds limited to an "average cruising speed" of 0.1c) could spread throughout a galaxy the size of the Milky Way in as little as half a million years.
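The role of exponential replication in such estimates can be shown with a back-of-the-envelope Python calculation; the replication factor, hop distance and build time below are arbitrary illustrative assumptions rather than figures from the studies mentioned above.

import math

STARS_IN_GALAXY = 1e11      # rough star count for a Milky-Way-sized galaxy
DAUGHTERS_PER_PROBE = 2     # assumed copies built at each stop
HOP_LIGHT_YEARS = 10        # assumed distance to the next target system
CRUISE_SPEED_C = 0.1        # fraction of light speed, as in the estimate above
BUILD_TIME_YEARS = 500      # assumed time to replicate before moving on

generations = math.ceil(math.log(STARS_IN_GALAXY, DAUGHTERS_PER_PROBE))
years_per_generation = HOP_LIGHT_YEARS / CRUISE_SPEED_C + BUILD_TIME_YEARS

print(generations)                          # about 37 doublings reach 1e11 probes
print(generations * years_per_generation)   # about 22,000 years of hops and rebuilds

Under these toy assumptions the probe population overtakes the number of stars after only a few dozen doublings, so the expansion front is limited mainly by travel time across the galaxy (crossing roughly 100,000 light-years at 0.1c alone takes on the order of a million years), which is why published estimates run to hundreds of thousands of years rather than the tens of thousands suggested by replication alone.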
Implications for Fermi's paradox
In 1981, Frank Tipler put forth an argument that extraterrestrial intelligences do not exist, based on the fact that von Neumann probes have not been observed. Given even a moderate rate of replication and the history of the galaxy, such probes should already be common throughout space and thus, we should have already encountered them. Because we have not, this shows that extraterrestrial intelligences do not exist. This is thus a resolution to the Fermi paradox – that is, the question of why we have not already encountered extraterrestrial intelligence if it is common throughout the universe.
A response came from Carl Sagan and William Newman. Now known as Sagan's Response, it pointed out that in fact Tipler had underestimated the rate of replication, and that von Neumann probes should have already started to consume most of the mass in the galaxy. Any intelligent race would therefore, Sagan and Newman reasoned, not design von Neumann probes in the first place, and would try to destroy any von Neumann probes found as soon as they were detected. As Robert Freitas has pointed out, the assumed capacity of von Neumann probes described by both sides of the debate is unlikely in reality, and more modestly reproducing systems are unlikely to be observable in their effects on our solar system or the galaxy as a whole.
Another objection to the prevalence of von Neumann probes is that civilizations of the type that could potentially create such devices may have inherently short lifetimes, and self-destruct before so advanced a stage is reached, through such events as biological or nuclear warfare, nanoterrorism, resource exhaustion, ecological catastrophe, or pandemics.
Simple workarounds exist to avoid the over-replication scenario. Radio transmitters, or other means of wireless communication, could be used by probes programmed not to replicate beyond a certain density (such as five probes per cubic parsec) or arbitrary limit (such as ten million within one century), analogous to the Hayflick limit in cell reproduction. One problem with this defence against uncontrolled replication is that it would only require a single probe to malfunction and begin unrestricted reproduction for the entire approach to fail – essentially a technological cancer – unless each probe also has the ability to detect such malfunction in its neighbours and implements a seek and destroy protocol (which in turn could lead to probe-on-probe space wars if faulty probes first managed to multiply to high numbers before they were found by sound ones, which could then well have programming to replicate to matching numbers so as to manage the infestation). Another workaround is based on the need for spacecraft heating during long interstellar travel. The use of plutonium as a thermal source would limit the ability to self-replicate. The spacecraft would have no programming to make more plutonium even if it found the required raw materials. Another is to program the spacecraft with a clear understanding of the dangers of uncontrolled replication.
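A minimal sketch of the density-capped replication rule described above is given below; the threshold, the reporting scheme, and the function name are invented for illustration and are not drawn from any actual proposal.

```python
# Toy replication policy: a probe replicates only while the locally
# reported probe density stays below a hard cap, analogous to the
# Hayflick-style limit mentioned in the text.
MAX_PROBES_PER_CUBIC_PARSEC = 5.0   # illustrative cap from the text

def may_replicate(reported_probe_count: int, surveyed_volume_pc3: float) -> bool:
    """Return True when the observed density is below the agreed cap."""
    density = reported_probe_count / surveyed_volume_pc3
    return density < MAX_PROBES_PER_CUBIC_PARSEC

print(may_replicate(12, 4.0))   # 3 probes per cubic parsec  -> True
print(may_replicate(40, 4.0))   # 10 probes per cubic parsec -> False
```

As the text notes, a single probe that ignores such a rule defeats the scheme, which is why the proposals also add mutual monitoring between probes on top of it.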
Applications for self-replicating spacecraft
The details of the mission of self-replicating starships can vary widely from proposal to proposal, and the only common trait is the self-replicating nature.
Von Neumann probes
A von Neumann probe is a spacecraft capable of replicating itself. It is a concatenation of two concepts: a "Von Neumann universal constructor" (self-replicating machine) and a probe (an instrument to explore or examine something). The concept is named after Hungarian American mathematician and physicist John von Neumann, who rigorously studied the concept of self-replicating machines that he called "Universal Assemblers" and which are often referred to as "von Neumann machines". Such constructs could be theorised to comprise five basic components (variations of this template could create other machines such as Bracewell probes):
Probe: which would contain the actual probing instruments & goal-directed AI to guide the construct.
Life-support systems: mechanisms to repair and maintain the construct.
Factory: mechanisms to harvest resources & replicate itself.
Memory banks: store programs for all its components & information gained by the probe.
Engine: motor to move the probe.
Andreas Hein and science fiction author Stephen Baxter proposed different types of von Neumann probes, termed "Philosopher" and "Founder", where the purpose of the former is exploration and for the latter preparing future settlement.
A near-term concept of a self-replicating probe has been proposed by the Initiative for Interstellar Studies, achieving about 70% self-replication, based on current and near-term technologies.
If a self-replicating probe finds evidence of primitive life (or a primitive, low-level culture) it might be programmed to lie dormant, silently observe, attempt to make contact (this variant is known as a Bracewell probe), or even interfere with or guide the evolution of life in some way.
Physicist Paul Davies of Arizona State University has raised the possibility of a probe resting on our own Moon, having arrived at some point in Earth's ancient prehistory and remained to monitor Earth, which is reminiscent of Arthur C. Clarke's "The Sentinel" and the Stanley Kubrick film 2001: A Space Odyssey that was based on Clarke's story.
A variant idea on the interstellar von Neumann probe idea is that of the "Astrochicken", proposed by Freeman Dyson. While it has the common traits of self-replication, exploration, and communication with its "home base", Dyson conceived the Astrochicken to explore and operate within our own planetary system, and not explore interstellar space.
Anders Sandberg and Stuart Armstrong argued that launching the colonization of the entire reachable universe through self-replicating probes is well within the capabilities of a star-spanning civilization, and proposed a theoretical approach for achieving it in 32 years, by mining planet Mercury for resources and constructing a Dyson Swarm around the Sun.
Berserkers
A variant of the self-replicating starship is the Berserker. Unlike the benign probe concept, Berserkers are programmed to seek out and exterminate lifeforms and life-bearing exoplanets whenever they are encountered.
The name is derived from the Berserker series of novels by Fred Saberhagen which describes a war between humanity and such machines. Saberhagen points out (through one of his characters) that the Berserker warships in his novels are not von Neumann machines themselves, but the larger complex of Berserker machines – including automated shipyards – do constitute a von Neumann machine. This again brings up the concept of an ecology of von Neumann machines, or even a von Neumann hive entity.
It is speculated in fiction that Berserkers could be created and launched by a xenophobic civilization (see Anvil of Stars, by Greg Bear, in the section In fiction below) or could theoretically "mutate" from a more benign probe. For instance, a von Neumann ship designed for terraforming processes – mining a planet's surface and adjusting its atmosphere to more human-friendly conditions – could be interpreted as attacking previously-inhabited planets, killing their inhabitants in the process of changing the planetary environment, and then self-replicating to dispatch more ships to 'attack' other planets.
Replicating seeder ships
Yet another variant on the idea of the self-replicating starship is that of the seeder ship. Such starships might store the genetic patterns of lifeforms from their home world, perhaps even of the species which created it. Upon finding a habitable exoplanet, or even one that might be terraformed, it would try to replicate such lifeforms – either from stored embryos or from stored information using molecular nanotechnology to build zygotes with varying genetic information from local raw materials.
Such ships might be terraforming vessels, preparing colony worlds for later colonization by other vessels, or – should they be programmed to recreate, raise, and educate individuals of the species that created it – self-replicating colonizers themselves. Seeder ships would be a suitable alternative to generation ships as a way to colonize worlds too distant to travel to in one lifetime.
In fiction
Von Neumann probes
2001: A Space Odyssey: The monoliths in Arthur C. Clarke's book and Stanley Kubrick's film 2001: A Space Odyssey were intended to be self-replicating probes, though the artifacts in "The Sentinel", Clarke's original short story upon which 2001 was based, were not. The film was to begin with a series of scientists explaining how probes like these would be the most efficient method of exploring outer space. Kubrick cut the opening segment from his film at the last minute, however, and these monoliths became almost mystical entities in both the film and Clarke's novel.
Cold As Ice: In the novel by Charles Sheffield, there is a segment where the author (a physicist) describes Von Neumann machines harvesting sulfur, nitrogen, phosphorus, helium-4, and various metals from the atmosphere of Jupiter.
Destiny's Road: Larry Niven frequently refers to Von Neumann probes in many of his works. In his 1998 book Destiny's Road, Von Neumann machines are scattered throughout the human colony world Destiny and its moon Quicksilver in order to build and maintain technology and to make up for the lack of the resident humans' technical knowledge; the Von Neumann machines primarily construct a stretchable fabric cloth capable of acting as a solar collector which serves as the humans' primary energy source. The Von Neumann machines also engage in ecological maintenance and other exploratory work.
The Devil's Blind Spot: See also Alexander Kluge, The Devil's Blind Spot (New Directions; 2004.)
Grey Goo: In the video game Grey Goo, the "Goo" faction is composed entirely of Von Neumann probes sent through various microscopic wormholes to map the Milky Way Galaxy. The faction's units are configurations of nanites used during their original mission of exploration, which have adapted to a combat role. The Goo starts as an antagonist to the Human and Beta factions, but their true objective is revealed during their portion of the single-player campaign. Related to, and inspired by, the Grey Goo doomsday scenario.
Spin: In the novel by Robert Charles Wilson, Earth is veiled by a temporal field. Humanity tries to understand and escape this field by using Von Neumann probes. It is later revealed that the field itself was generated by Von Neumann probes from another civilization, and that a competition for resources had taken place between earth's and the aliens' probes.
The Third Millennium: A History of the World AD 2000–3000: In the book by Brian Stableford and David Langford (published by Alfred A. Knopf, Inc., 1985) humanity sends cycle-limited Von Neumann probes out to the nearest stars to do open-ended exploration and to announce humanity's existence to whoever might encounter them.
Von Neumann's War: In Von Neumann's War by John Ringo and Travis S. Taylor (published by Baen Books in 2007) Von Neumann probes arrive in the solar system, moving in from the outer planets, and converting all metals into gigantic structures. Eventually, they arrive on Earth, wiping out much of the population before being beaten back when humanity reverse engineers some of the probes.
We Are Legion (We Are Bob) by Dennis E. Taylor: Bob Johansson, the former owner of a software company, dies in a car accident, only to wake up a hundred years later as a computer emulation of Bob. Given a Von Neumann probe by America's religious government, he is sent out to explore, exploit, expand, and experiment for the good of the human race.
ARMA 3: In the "First Contact" single-player campaign introduced in the Contact expansion, a series of extraterrestrial network structures are found in various locations on Earth, one being the fictional country of Livonia, the campaign's setting. In the credits of the campaign, a radio broadcast reveals that a popular theory surrounding the networks is that they are a type of Von Neumann probe that arrived on Earth during the time of a supercontinent.
Questionable Content: In Jeph Jacques' webcomic, Faye Whitaker refers to the "Floating Black Slab Emitting A Low Hum" as a possible Von Neumann probe in Episode 4645: Accessorized.
Berserkers
In the science fiction short story collection Berserker by Fred Saberhagen, a series of short stories include accounts of battles fought against extremely destructive Berserker machines. This and subsequent books set in the same fictional universe are the origin of the term "Berserker probe".
In the 2003 miniseries reboot of Battlestar Galactica (and the subsequent 2004 series) the Cylons are similar to Berserkers in their wish to destroy human life. They were created by humans in a group of fictional planets called the Twelve Colonies. The Cylons created special models that look like humans in order to destroy the twelve colonies and later, the fleeing fleet of surviving humans.
The Borg of Star Trek – a self-replicating bio-mechanical race that is dedicated to the task of achieving perfection through the assimilation of useful technology and lifeforms. Their ships are massive mechanical cubes (a close step from the Berserker's massive mechanical Spheres).
Science fiction author Larry Niven later borrowed this notion in his short story "A Teardrop Falls".
In the computer game Star Control II, the Slylandro Probe is an out-of-control self-replicating probe that attacks starships of other races. They were not originally intended to be a berserker probe; they sought out intelligent life for peaceful contact, but due to a programming error, they would immediately switch to "resource extraction" mode and attempt to dismantle the target ship for raw materials. While the plot claims that the probes reproduce "at a geometric rate", the game itself caps the frequency of encountering these probes. It is possible to deal with the menace in a side-quest, but this is not necessary to complete the game, as the probes only appear one at a time, and the player's ship will eventually be fast and powerful enough to outrun them or destroy them for resources – although the probes will eventually dominate the entire game universe.
In Iain Banks' novel Excession, hegemonising swarms are described as a form of Outside Context Problem. An example of an "Aggressive Hegemonising Swarm Object" is given as an uncontrolled self-replicating probe with the goal of turning all matter into copies of itself. After causing great damage, they are somehow transformed using unspecified techniques by the Zetetic Elench and become "Evangelical Hegemonising Swarm Objects". Such swarms (referred to as "smatter") reappear in the later novels Surface Detail (which features scenes of space combat against the swarms) and The Hydrogen Sonata.
The Inhibitors from Alastair Reynolds' Revelation Space series are self-replicating machines whose purpose is to inhibit the development of intelligent star-faring cultures. They are dormant for extreme periods of time until they detect the presence of a space-faring culture and proceed to exterminate it even to the point of sterilizing entire planets. They are very difficult to destroy as they seem to have faced every type of weapon ever devised and only need a short time to 'remember' the necessary counter-measures.
Also from Alastair Reynolds' books, the "Greenfly" terraforming machines are another form of berserker machine. For unknown reasons, probably an error in their programming, they destroy planets and turn them into trillions of domes filled with vegetation; their purpose is, after all, to produce a habitable environment for humans, but in doing so they inadvertently decimate the human race. By the year 10,000, they have wiped out most of the galaxy.
The Reapers in the video game series Mass Effect are also self-replicating probes bent on destroying any advanced civilization encountered in the galaxy. They lie dormant in the vast spaces between the galaxies and follow a cycle of extermination. It is seen in Mass Effect 2 that they assimilate any advanced species.
Mantrid Drones from the science fiction television series Lexx were an extremely aggressive type of self-replicating Berserker machine, eventually converting the majority of the matter in the universe into copies of themselves in the course of their quest to thoroughly exterminate humanity.
The Babylon 5 episode "Infection" showed a smaller scale berserker in the form of the Icarran War Machine. After being created with the goal of defeating an unspecified enemy faction, the War Machines proceeded to exterminate all life on the planet Icarra VII because they had been programmed with standards for what constituted a 'Pure Icaran' based on religious teachings, which no actual Icaran could satisfy. Because the Icaran were pre-starflight, the War Machines became dormant after completing their task rather than spreading. One unit was reactivated on-board Babylon 5 after being smuggled past quarantine by an unscrupulous archaeologist, but after being confronted with how they had rendered Icara VII a dead world, the simulated personality of the War Machine committed suicide.
The Babylon 5 episode "A Day in the Strife" features a probe that threatens the station with destruction unless a series of questions designed to test a civilization's level of advancement are answered correctly. The commander of the station correctly surmises that the probe is actually a berserker and that if the questions are answered the probe would identify them as a threat to its originating civilization and detonate.
Greg Bear's novel The Forge of God deals directly with the concept of "Berserker" von Neumann probes and their consequences. The idea is further explored in the novel's sequel, Anvil of Stars, which explores the reaction other civilizations have to the creation and release of Berserkers.
In Gregory Benford's Galactic Center Saga series, an antagonist berserker machine race is encountered by Earth, first as a probe in In the Ocean of Night, and then in an attack in Across the Sea of Suns. The berserker machines do not seek to completely eradicate a race if merely throwing it into a primitive low technological state will do as they did to the EMs encountered in Across the Sea of Suns. The alien machine Watchers would not be considered von Neumann machines themselves, but the collective machine race could.
On Stargate SG-1 the Replicators were a vicious race of insect-like robots that were originally created by an android named Reese to serve as toys. They grew beyond her control and began evolving, eventually spreading throughout at least two galaxies. In addition to ordinary autonomous evolution they were able to analyze and incorporate new technologies they encountered into themselves, ultimately making them one of the most advanced "races" known.
On Stargate Atlantis, a second race of replicators created by the Ancients were encountered in the Pegasus Galaxy. They were created as a means to defeat the Wraith. The Ancients attempted to destroy them after they began showing signs of sentience and requested that their drive to kill the wraith be removed. This failed, and an unspecified length of time after the Ancients retreated to the Milky Way Galaxy, the replicators nearly succeeded in destroying the Wraith. The Wraith were able to hack into the replicators and deactivate the extermination drive, at which point they retreated to their home world and were not heard from again until encountered by the Atlantis Expedition. After the Atlantis Expedition reactivated this dormant directive, the replicators embarked on a plan to kill the Wraith by removing their food source, i.e. all humans in the Pegasus Galaxy.
In Stargate Universe Season 2, a galaxy billions of light years distant from the Milky Way is infested with drone ships that are programmed to annihilate intelligent life and advanced technology. The drone ships attack other space ships (including Destiny) as well as humans on planetary surfaces, but don't bother destroying primitive technology such as buildings unless they are harboring intelligent life or advanced technology.
In the Justice League Unlimited episode "Dark Heart", an alien weapon based on this same idea lands on Earth.
In the Homeworld: Cataclysm video game, a bio-mechanical virus called Beast has the ability to alter organic and mechanic material to suit its needs, and the ships infected become self-replicating hubs for the virus.
In the SF MMO EVE Online, experiments to create more autonomous drones than the ones used by player's ships accidentally created 'rogue drones' which form hives in certain parts of space and are used extensively in missions as difficult opponents.
In the computer game Sword of the Stars, the player may randomly encounter "Von Neumann". A Von Neumann mothership appears along with smaller Von Neumann probes, which attack and consume the player's ships. The probes then return to the mothership, returning the consumed material. If probes are destroyed, the mothership will create new ones. If all the player's ships are destroyed, the Von Neumann probes will reduce the planets' resource levels before leaving. The probes appear as blue octahedrons, with small spheres attached to the apical points. The mothership is a larger version of the probes. In the 2008 expansion A Murder of Crows, Kerberos Productions also introduces the VN Berserker, a combat-oriented ship, which attacks the player's planets and ships in retaliation for violence against VN motherships. If the player destroys the Berserker, things will escalate and a System Destroyer will attack.
In the X Computer Game Series, the Xenon are a malevolent race of artificially intelligent machines descended from terraforming ships sent out by humans to prepare worlds for eventual colonization; the result caused by a bugged software update. They are continual antagonists in the X-Universe.
In the comic Transmetropolitan a character mentions "Von Neumann rectal infestations" which are apparently caused by "Shit-ticks that build more shit-ticks that build more shit-ticks".
In the anime Vandread, harvester ships attack vessels from both male- and female-dominated factions and harvest hull, reactors, and computer components to make more of themselves. To this end, Harvester ships are built around mobile factories. Earth-born humans also view the inhabitants of the various colonies to be little more than spare parts.
In Earth 2160, the Morphidian Aliens rely on strain aliens for colonization. Most of these strain-derived aliens can absorb water, then reproduce like a colony of cells. In this manner, even one Lady (or Princess, or Queen) can create enough clones to cover the map. Once they have significant numbers, they "choose an evolutionary path" and swarm the enemy, taking over their resources.
In the European comic series Storm, numbers 20 & 21, a kind of berserk von Neumann probe is set on a collision course with the Pandarve system.
In PC role-playing game Space Rangers and its sequel Space Rangers 2: Dominators, a league of 5 nations battles three different types of Berserker robots. One that focuses on invading planets, another that battles normal space and third that lives in hyperspace.
In the Star Wolves video game series, Berserkers are a self-replicating machine menace that threatens the known universe for purposes of destruction and/or assimilation of humanity.
The Star Wars expanded universe features the World Devastators, large ships designed and built by the Galactic Empire that tear apart planets to use its materials to build other ships or even upgrade or replicate themselves.
The Tet in the 2013 film Oblivion is revealed to be a Berserker of sorts: a sentient machine that travels from planet to planet, exterminating the indigenous population using armies of robotic drones and cloned members of the target species. The Tet then proceeds to harvest the planet's water in order to extract hydrogen for nuclear fusion.
In Eclipse Phase, an ETI probe is believed to have infected the TITAN computer systems with the Exsurgent virus to cause them to go berserk and wage war on humanity. This would make ETI probes a form of berserker, albeit one that uses pre-existing computer systems as its key weapons.
In Herr aller Dinge by Andreas Eschbach, an ancient nano-machine complex is discovered buried in a glacier off the coast of Russia. When it comes into contact with materials it needs to fulfill its mission, it creates a launch facility and launches a spacecraft. It is later revealed that the nano-machines were created by a prehistoric human race with the intention of destroying other interstellar civilizations (for an unknown reason). It is proposed that there is no evidence of that race because of the nano-machines themselves and their ability to manipulate matter at an atomic level. It is even suggested that viruses could be ancient nano-machines that have evolved over time.
Replicating seeder ships
Code of the Lifemaker by James P. Hogan describes the evolution of a society of humanoid-like robots that inhabit Saturn's moon Titan. The sentient machines are descended from an unmanned factory ship that was meant to be self-replicating, but suffered radiation damage and went off course, eventually landing on Titan around 1,000,000 BC.
Manifold: Space, Stephen Baxter's novel, starts with the discovery of alien self-replicating machines active within the Solar system.
In the Metroid Prime subseries of games, the massive Leviathans are probes routinely sent out from the planet Phaaze to infect other planets with Phazon radiation and eventually turn these planets into clones of Phaaze, where the self-replication process can continue.
In David Brin's short story collection, The River of Time (1986), the short story "Lungfish" prominently features von Neumann probes. Not only does he explore the concept of the probes themselves, but indirectly explores the ideas of competition between different designs of probes, evolution of von Neumann probes in the face of such competition, and the development of a type of ecology between von Neumann probes. One of the vessels mentioned is clearly a Seeder type.
In The Songs of Distant Earth by Arthur C. Clarke, humanity on a future Earth facing imminent destruction creates automated seedships that act as fire and forget lifeboats aimed at distant, habitable worlds. Upon landing, the ship begins to create new humans from stored genetic information, and an onboard computer system raises and trains the first few generations of new inhabitants. The massive ships are then broken down and used as building materials by their "children".
On the Stargate Atlantis episode "Remnants", the Atlantis team finds an ancient probe that they later learn was launched by a now-extinct, technologically advanced race in order to seed new worlds and re-propagate their silicon-based species. The probe communicated with inhabitants of Atlantis by means of hallucinations.
On the Stargate SG-1 episode "Scorched Earth", a species of newly relocated humanoids face extinction via an automated terraforming colony seeder ship controlled by an Artificial Intelligence.
On Stargate Universe, the human adventurers live on a ship called Destiny. Its mission was to connect a network of Stargates, placed by preceding seeder ships on planets capable of supporting life to allow instantaneous travel between them.
The trilogy of albums which conclude the comic book series Storm by Don Lawrence (starting with Chronicles of Pandarve 11: The Von Neumann machine) is based on self-replicating conscious machines containing the sum of all human knowledge employed to rebuild human society throughout the universe in case of disaster on Earth. The probe malfunctions and although new probes are built, they do not separate from the motherprobe, which eventually results in a cluster of malfunctioning probes so big that it can absorb entire moons.
In the Xeno series, a rogue seeder ship (technically a berserker) known as "Deus" created humanity.
See also
Asteroid mining
Astrochicken
Bracewell probe
Embryo space colonization
Generation ship
Interstellar ark
Interstellar travel
Self-replicating machine
Sleeper ship
Space colonization
Transcension hypothesis
References
Boyce, Chris. Extraterrestrial Encounter: A Personal Perspective. London: David & Charles, Newton Abbot (1979).
von Tiesenhausen, G., and Darbro, W. A. "Self-Replicating Systems," NASA Technical Memorandum 78304. Washington, D.C.: National Aeronautics and Space Administration (1980).
Freitas Jr., Robert A. "A Self-Reproducing Interstellar Probe," Journal of the British Interplanetary Society, 33, 251–264 (1980). rfreitas.com, also molecularassembler.com
Valdes, F., and Freitas, R. A. "Comparison of Reproducing and Non-Reproducing Starprobe Strategies for Galactic Exploration," Journal of the British Interplanetary Society, 33, 402–408 (1980). rfreitas.com
Artificial life
Fictional spacecraft by type
Hypothetical spacecraft
Self-replicating machines |
706602 | https://en.wikipedia.org/wiki/Zombies%20Ate%20My%20Neighbors | Zombies Ate My Neighbors | Zombies Ate My Neighbors is a run and gun video game developed by LucasArts and published by Konami for the Super NES and Sega Mega Drive/Genesis consoles in 1993.
One or two players take control of protagonists, Zeke and Julie, in order to rescue the titular neighbors from monsters often seen in horror movies. Aiding them in this task are a variety of weapons and power-ups that can be used to battle the numerous enemies in each level. Various elements and aspects of horror movies are referenced in the game with some of its more violent content being censored in various territories such as Europe and Australia, where it is known only as Zombies.
While not a great commercial success, the game was well-received for its graphical style, humor, and deep gameplay. It spawned a sequel, Ghoul Patrol, released in 1994. Both games were re-released as part of Lucasfilm Classic Games: Zombies Ate My Neighbors and Ghoul Patrol for the Nintendo Switch, PlayStation 4, Xbox One and Windows in June 2021.
Gameplay
The mad scientist Dr. Tongue has created a wide variety of monsters within the bowels of his castle and has unleashed them on nearby suburban areas, terrorizing its inhabitants. Two teenage friends, Zeke and Julie, having witnessed the attack of said monsters, arm themselves with a great deal of unconventional weaponry and items to combat them and save their neighbors from certain death. Ultimately, they will come face to face with Dr. Tongue himself and defeat him to put an end to his plans.
The player can choose between Zeke and Julie, or play both in a two-player mode. They navigate suburban neighborhoods, shopping malls, pyramids, haunted castles, and other areas, destroying a variety of horror-movie monsters, including vampires, werewolves, huge demonic babies, spiders, squidmen, evil dolls, aliens, UFOs, giant ants, blobs, giant worms, mummies, chainsaw-wielding maniacs, "pod people" (aggressive alien clones of the players), and the game's namesake, zombies. In each of the 48 stages, which includes seven optional bonus levels, the players must rescue numerous types of neighbors, including barbecue chefs, teachers, babies, tourists, archeologists, soldiers, dogs, and cheerleaders. Once all neighbors on a level have been saved by the players touching them, a door opens that will take the player to the next stage.
All types of neighbors will be killed if an enemy touches them, preventing them from being saved for the remainder of the game or until an "Extra Bonus Victim" is awarded. On some levels, daytime gradually turns to night. Upon nightfall, tourists transform into werewolves and cannot be saved; the game counts it as if they had been killed. At least one neighbor must be saved from each level to progress to the next. The game is lost if the players lose all of their lives or if all of the neighbors are killed. Scoring points earns the players additional neighbors to save and extra lives. Each level has at most ten neighbors, and each neighbor type is worth a different number of points.
There are various items that the players can pick up along the way. These include keys that open up doors, health packs that restore health, and potions with various effects such as increasing speed or temporarily transforming the player into a powerful monster. Players can also collect various types of weapons, such as an Uzi water gun, bazookas, weed-whackers, explosive soda cans, ice pops, tomatoes, silverware, dishes, ancient crucifixes, flamethrowers, fire extinguishers and Martian bubble guns, each with their own effectiveness against certain types of enemies.
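The weapon-versus-monster matchups amount to a lookup of effectiveness by enemy type; a toy sketch of such a table is shown below, with multiplier values that are purely illustrative rather than taken from the game's actual data.

```python
# Illustrative effectiveness table: damage multiplier per (weapon, enemy) pair.
# Unlisted pairs fall back to a default multiplier of 1.
EFFECTIVENESS = {
    ("silverware", "werewolf"): 10,        # silverware makes short work of werewolves
    ("crucifix", "vampire"): 4,            # crucifixes are strong against vampires
    ("fire_extinguisher", "blob"): 3,
    ("water_gun", "zombie"): 1,
}

def damage(weapon: str, enemy: str, base_damage: int = 1) -> int:
    """Scale base damage by how effective the weapon is against this enemy."""
    return base_damage * EFFECTIVENESS.get((weapon, enemy), 1)

print(damage("silverware", "werewolf"))   # 10
print(damage("water_gun", "werewolf"))    # 1
```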
Development
Zombies Ate My Neighbors was originally developed by LucasArts. It was published by Konami, a company already known for platformers in 1993. Music for the game was composed by Joseph "Joe" McDermott. The game was developed on the Super Nintendo, before it was ported to the Sega Genesis about halfway through. The ZAMN engine would later be used for Ghoul Patrol, Metal Warriors and Big Sky Trooper. The developer wanted to include battery save in the game but was unable to as they could not afford it.
The monsters in the game are based on classic horror films released in the 1950s and more modern films like Friday the 13th and The Texas Chain Saw Massacre. Weapon effectiveness is also based on these depictions; werewolves die in one hit if attacked with silverware and vampires die faster if attacked with the crucifix. The SNES version of the game includes a flamethrower that is not present in the Sega Mega Drive version. The North American release shipped with a variant cover art in limited quantities.
Release
The game was subject to some censorship. It was released before the ESRB existed, at a time when Nintendo did not want violence in its video games. Nintendo of America ordered all depictions of blood and gore to be removed or changed to purple ooze. Censorship committees in several European nations (the United Kingdom, Ireland, Italy, France, Spain, Austria, Portugal, Finland, Denmark, Norway, Sweden, and Germany) went further, renaming the game Zombies and ordering other changes, including the replacement of the chainsaw-wielding enemies with lumberjacks wielding axes.
In October 2009, the Super NES version of Zombies Ate My Neighbors was re-released for the Wii Virtual Console.
In May 2021, the game and its sequel were announced for the Nintendo Switch, PlayStation 4, Xbox One and Windows, with a port developed by Dotemu and co-published by Disney Interactive and Lucasfilm Games. Lucasfilm Classic Games: Zombies Ate My Neighbors and Ghoul Patrol was released on June 29, 2021.
Reception
Although not an immediate success, Zombies Ate My Neighbors became a cult classic years after its release. Upon its release, it received above-average praise, earning an 84.5% aggregate score on GameRankings. Reviewers of the game often cited its humor, two-player mode, graphics, and music as some of its best aspects.
Mike Seiblier of Sega-16.com said the variety of weapons shows off the game's "tongue in cheek nature by giving you weapons and items like silverware, dishes, soda can grenades, a weed whacker, keys, bazookas as well as health packs". The Armchair Empire similarly praised the variety and strategy that the weapon system incorporated. They also made note of the "little details that make it so cool to play", saying "If you come across a door, which you don't have the key for, you can blow it open with the bazooka." Critics agreed the co-op mode is "highly recommended".
The game's "colorful and detailed" graphics have been praised as well as its soundtrack which Seibler called an "homage to the spooky, over the top music found in old, scary flicks". He went on to mention the sound effects are equally impressive. Corbie Dillard of Nintendolife.com said the graphics do not "exactly set new 16-bit standards, but they still manage to look sharp and the creative use of the darker color scheme used throughout the game really makes the creepy visuals come to life onscreen". He ended his review by affectionately calling the game a "second-rate horror movie" version of Contra.
Upon the game's release for the Wii Virtual Console, Zombies Ate My Neighbors received immense praise and earned an Editor's Choice Award from IGN. The game has been regarded as one of "the most requested additions to the VC system even before the Wii launch".
Accolades
Mega magazine ranked the game 42nd in their "Top 50 Mega Drive Games" in 1994. IGN ranks it the 48th best Super Nintendo game. Retro Sanctuary ranked the game 72nd in their "Top 100 Best SNES Games", praising it as a very fun horror-themed shoot 'em up and comparing it to a light-hearted version of Alien Syndrome. In 2018, Complex listed the game 48th on its list of "The Best Super Nintendo Games of All Time", calling it amazing and noting only that the levels got a little repetitive. In 2017, GamesRadar rated the game 21st on their list of the best Sega Genesis/Mega Drive games of all time.
Legacy
In 1997, LucasArts released a game for Sony's PlayStation and Sega's Sega Saturn titled Herc's Adventures, which uses the same basic gameplay format and mechanics as Zombies Ate My Neighbors. Programmer Chris Long cited Zombies Ate My Neighbors as a major influence on his 1997 game Swagman.
Day of the Tentacle, another game developed by LucasArts, is referenced in Zombies Ate My Neighbors through a secret level. Comparisons to the game Dead Rising, released for the Xbox 360 in 2006, have been drawn, Lucas Thomas of IGN saying "Zombies Ate My Neighbors is basically a comical 16-bit template for the new Xbox 360 release, Dead Rising. And like that game, this one arms you with a pretty bizarre arsenal. Weed whackers, exploding soda cans, and flying silverware all make an appearance to help you, or you and a friend, put a hurt on these living dead."
Sequels and spin-offs
A sequel entitled Ghoul Patrol was released in 1994, but was not as well-received as its predecessor. Originally, Ghoul Patrol was not intended to be released as a sequel to Zombies Ate My Neighbors, but was re-worked as such to increase sales.
A film based on the game was reported to be in development. The film was being penned and produced by screenwriter and director John Darko, known for his work on James Wan's Insidious and Aaron Sims' Archetype. At the time of the report, the film was in the process of securing rights from LucasArts and obtaining a director as well as financing.
Notes
References
External links
1993 video games
Cooperative video games
Horror video games
Konami games
LucasArts games
Nintendo Switch games
PlayStation 4 games
Xbox One games
Run and gun games
Sega Genesis games
Super Nintendo Entertainment System games
Video games scored by George Sanger
Video games developed in the United States
Video games featuring female protagonists
Virtual Console games
Video games about zombies
Windows games |
12709 | https://en.wikipedia.org/wiki/Goodtimes%20virus | Goodtimes virus | The Goodtimes virus, also styled as Good Times virus, was a computer virus hoax that spread during the early years of the Internet's popularity. Warnings about a computer virus named "Good Times" began being passed around among Internet users in 1994. The Goodtimes virus was supposedly transmitted via an email bearing the subject header "Good Times" or "Goodtimes", hence the virus's name, and the warning recommended deleting any such email unread. The virus described in the warnings did not exist, but the warnings themselves were, in effect, virus-like. In 1997 the Cult of the Dead Cow hacker collective announced that they had been responsible for the perpetration of the "Good Times" virus hoax as an exercise to "prove the gullibility of self-proclaimed 'experts' on the Internet".
History
The first recorded email warnings about the Good Times virus showed up on 15 November 1994. The first message was brief, a simple five sentence email with a Christmas greeting, advising recipients not to open email messages with the subject "GOOD TIMES!!", as doing so would "ruin" their files. Later messages became more intricate. The most common versions—the "Infinite loop" and "ASCII buffer" editions—were much longer, containing descriptions of what exactly Good Times would do to the computer of someone who opened it, as well as comparisons to other viruses of the time, and references to a U.S. Federal Communications Commission warning. The warning emails themselves usually contained the very subject line warned against.
Sample email
Purported effects
The longer version of the Good Times warning contained descriptions of what Good Times was supposedly capable of doing to computers. In addition to sending itself to every email address in a recipient's received or sent mail, the Good Times virus was said to cause a wide variety of other effects. For example, one version said that if an infected computer contained a hard drive, it could be destroyed. If Good Times was not stopped in time, an infected computer would enter an "nth-complexity infinite binary loop" (a meaningless term), damaging the processor. The "ASCII buffer" email described the mechanism of Good Times as a buffer overflow.
Hoaxes similar to Good Times
A number of computer virus hoaxes appeared after the Good Times hoax had begun to be widely shared. These messages were similar in form to Good Times, warning users not to open messages bearing particular subject lines. Subject lines mentioned in these emails include "Penpal greetings", "Free Money", "Deeyenda", "Invitation", and "Win a Holiday".
The Bad Times computer virus warning is generally considered to be a spoof of the Good Times warning.
Viruses that function like Good Times
Developments in mail systems such as Microsoft Outlook, made without sufficient regard for their security implications, eventually made viruses that really do propagate themselves via email possible. Notable examples include the Melissa worm, the ILOVEYOU virus, and the Anna Kournikova virus. In some cases, a user must open a document or program contained in an email message in order to spread the virus; in others, notably the Kak worm, merely opening or previewing an email message itself will trigger the virus.
Some e-mail viruses written after the Good Times scare contained text announcing that "This virus is called 'Good Times'", presumably hoping to gain kudos amongst other virus writers by appearing to have created a worldwide scare. In general, virus researchers avoided naming these viruses "Good Times", but an obvious potential for confusion exists, and some anti-virus tools may well detect a real virus they identify as "Good Times", though such a virus was not the cause of the original scare.
Spoofs
Weird Al Yankovic made a song parody of the virus titled "Virus Alert".
The Bad Times virus hoax was created years later.
References
External links
Internet memes
Virus hoaxes
1994 hoaxes |
1319366 | https://en.wikipedia.org/wiki/Sequence%20profiling%20tool | Sequence profiling tool | A sequence profiling tool in bioinformatics is a type of software that presents information related to a genetic sequence, gene name, or keyword input. Such tools generally take a query such as a DNA, RNA, or protein sequence or ‘keyword’ and search one or more databases for information related to that sequence. Summaries and aggregate results are provided in a standardized format describing the information that would otherwise have required visits to many smaller sites or direct literature searches to compile. Many sequence profiling tools are software portals or gateways that simplify the process of finding information about a query in the large and growing number of bioinformatics databases. Access to these kinds of tools is either web-based or through locally downloadable executables.
Introduction and usage
The "post-genomics" era has given rise to a range of web-based tools and software to compile, organize, and deliver large amounts of primary sequence information, as well as protein structures, gene annotations, sequence alignments, and other common bioinformatics tasks.
In general, there exist three types of databases and service providers. The first includes the popular public-domain or open-access databases supported by funding and grants, such as NCBI, ExPASy, Ensembl, and PDB. The second includes smaller or more specific databases organized and compiled by individual research groups; examples include the Yeast Genome Database and RNA databases. The third and final one includes private corporate or institutional databases that require payment or institutional affiliation to access. Such examples are rare given the globalization of public databases, unless the purported service is 'in development' or the end point of the analysis is of commercial value.
Typical scenarios of a profiling approach become relevant, particularly, in the cases of the first two groups, where researchers commonly wish to combine information derived from several sources about a single query or target sequence. For example, users might use the sequence alignment and search tool BLAST to identify homologs of their gene of interest in other species, and then use these results to locate a solved protein structure for one of the homologs. Similarly, they might also want to know the likely secondary structure of the mRNA encoding the gene of interest, or whether a company sells a DNA construct containing the gene. Sequence profiling tools serve to automate and integrate the process of seeking such disparate information by rendering the process of searching several different external databases transparent to the user.
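That chained query can be scripted, for example, with Biopython's remote BLAST interface; the sketch below searches NCBI's copy of the PDB sequence database so that every hit corresponds to a solved structure. The query fragment and E-value cutoff are placeholders.

```python
from Bio.Blast import NCBIWWW, NCBIXML

# Placeholder protein fragment; substitute the real sequence of interest.
query = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"

# Remote BLASTP against PDB-derived sequences (requires network access).
handle = NCBIWWW.qblast("blastp", "pdb", query)
record = NCBIXML.read(handle)

# Report the strongest few hits, i.e. homologs with solved structures.
for alignment in record.alignments[:5]:
    best_hsp = alignment.hsps[0]
    if best_hsp.expect < 1e-5:            # illustrative significance cutoff
        print(alignment.title, best_hsp.expect)
```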
Many public databases are already extensively linked so that complementary information in another database is easily accessible; for example, Genbank and the PDB are closely intertwined. However, specialized tools organized and hosted by specific research groups can be difficult to integrate into this linkage effort because they are narrowly focused, are frequently modified, or use custom versions of common file formats. Advantages of sequence profiling tools include the ability to use multiple of these specialized tools in a single query and present the output with a common interface, the ability to direct the output of one set of tools or database searches into the input of another, and the capacity to disseminate hosting and compilation obligations to a network of research groups and institutions rather than a single centralized repository.
Keyword based profilers
Most of the profiling tools available on the web today fall into this category. The user, upon visiting the site or tool, enters any relevant information, such as a keyword (e.g., dystrophy or diabetes), a GenBank accession number, or a PDB ID. All the relevant hits found by the search are presented in a format unique to each tool's main focus. Profiling tools based on keyword searches are essentially search engines that are highly specialized for bioinformatics work, thereby eliminating the clutter of irrelevant or non-scholarly hits that might occur with a traditional search engine like Google. Most keyword-based profiling tools allow flexible types of keyword input: accession numbers from indexed databases as well as traditional keyword descriptors.
Each profiling tool has its own focus and area of interest. For example, the NCBI search engine Entrez segregates its hits by category, so that users looking for protein structure information can screen out sequences with no corresponding structure, while users interested in perusing the literature on a subject can view abstracts of papers published in scholarly journals without distraction from gene or sequence results. The PubMed biosciences literature database is a popular tool for literature searches, though this service is nearly equaled by the more general Google Scholar.
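As an illustration, the same style of keyword query can be issued programmatically against the Entrez services with Biopython; the contact e-mail address and search term below are placeholders.

```python
from Bio import Entrez

Entrez.email = "you@example.org"   # NCBI asks callers to identify themselves

# Keyword search of the protein database, much as a keyword-based profiler would do.
handle = Entrez.esearch(db="protein", term="dystrophin AND Homo sapiens[Organism]")
result = Entrez.read(handle)
handle.close()

print(result["Count"])     # number of matching records
print(result["IdList"])    # identifiers a profiling tool could fan out to other services
```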
Keyword-based data aggregation services like the Bioinformatic Harvester provide reports from a variety of third-party servers in an as-is format so that users need not visit the website or install the software for each individual component service. This is particularly invaluable given the rapid emergence of various sites providing different sequence analysis and manipulation tools. Another aggregative web portal, the Human Protein Reference Database (HPRD), contains manually annotated and curated entries for human proteins. The information provided is thus both selective and comprehensive, and the query format is flexible and intuitive. The pros of developing manually curated databases include the presentation of proofread material and the concept of 'molecule authorities' who undertake responsibility for specific proteins. The cons are that they are typically slower to update and may not contain very new or disputed data.
Sequence data based profilers
A typical sequence profiling tool carries this further by using an actual DNA, RNA, or protein sequence as an input and allows the user to visit different web-based analysis tools to obtain the information desired. Such tools are also commonly supplied with commercial laboratory equipment like gene sequencers or sometimes sold as software applications for molecular biology. In another public-database example, the BLAST sequence search report from NCBI provides a link from its alignment report to other relevant information in its own databases, if such specific information exists.
For example, a retrieved record that contains a human sequence will carry a separate link that connects to its location on a human genome map; a record that contains a sequence for which a 3-D structure has been solved would carry a link that connects it to its structure database. Sequerome, a public service tool, links the entire BLAST report to many third-party servers/sites that provide highly specific services in sequence manipulation, such as restriction enzyme maps, open reading frame analyses for nucleotide sequences, and secondary structure prediction. The tool provides the added advantage of maintaining a research log of the operations performed by the user, which can then be conveniently archived using 'mail', 'print' or 'save' functionality. Thus an entire operation of researching a sequence with different tools, through to carrying a project to completion, can be performed within one browser interface. Consequently, future generations of sequence profiling tools are expected to include the ability to collaborate online with other researchers to share project logs and research tools, annotate results of sequence analysis or lab work, and customize and automate the processing of sets of sequence data. InstaSeq is a Google-powered search tool that allows the user to directly enter a sequence and search the entire World Wide Web. This unique search engine, the only one of its kind, stands in contrast to searching specific databases such as GenBank.
As a result, the user can end up with a privately hosted document or a page from a lesser-known database from just about anywhere in the world. Though sequence-based profilers are currently few and far between, their key role will become evident when huge amounts of sequence data need to be cross-processed across portals and domains.
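The kinds of manipulation such tools farm out, restriction maps and reading-frame analysis among them, can also be mimicked locally with Biopython; the nucleotide sequence in the sketch below is made up purely for illustration.

```python
from Bio.Seq import Seq
from Bio.Restriction import BamHI, EcoRI

# Made-up nucleotide sequence used only for illustration.
dna = Seq("ATGGAATTCGGATCCTTAGCTGAATTCTAA")

# Restriction map: cut positions reported for each enzyme.
print("EcoRI sites:", EcoRI.search(dna))
print("BamHI sites:", BamHI.search(dna))

# Crude open-reading-frame check: translate from the first ATG to the first stop codon.
start = dna.find("ATG")
if start != -1:
    print("ORF translation:", dna[start:].translate(to_stop=True))
```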
Future growth and directions
The proliferation of bioinformatics tools for genetic analysis aids researchers in identifying and categorizing genes and gene sets of interest in their work; however, the large variety of tools that perform substantially similar aggregative and analytical functions can also confuse and frustrate new users. The decentralization encouraged by aggregative tools allows individual research groups to maintain specialized servers dedicated to specific types of data analysis in the expectation that their output will be collected into a larger report on a gene or protein of interest to other researchers.
Data produced by microarray experiments, two-hybrid screening, and other high-throughput biological experiments is voluminous and difficult to analyze by hand; the efforts of structural genomics collaborations that are aimed at quickly solving large numbers of highly varied protein structures also increase the need for integration between sequence and structure databases and portals. This impetus toward developing more comprehensive and more user-friendly methods of sequence profiling makes this an active area of research among current genomics researchers.
See also
Entrez
Metadata
Sequence analysis
Sequence motif
Sequerome
References
Bioinformatics software |
14996686 | https://en.wikipedia.org/wiki/Avaya%20Secure%20Router%204134 | Avaya Secure Router 4134 | The Avaya Secure Router 4134 (or SR-4134) is a telecommunications and computer networking device manufactured by Avaya that combines the functions of WAN routing, stateful firewall security, Ethernet switching, IP telephony, and Microsoft mediation in one device. In addition to sharing many features with other routers, such as VRRP, MPLS, and hot-swappable modules, the SR-4134 also guards against individual circuit failures, can recover from device failures in less than a second, and instantly restores bandwidth once a connection has been repaired. The system is very energy efficient, and according to testing by the Tolly Group can save the owner as much as 40% on energy total cost of ownership. In July 2011 it was integrated with the Silver Peak WAN optimization appliance to optimize the performance of enterprise voice, video, and unified communications (UC) and to ensure that remote users have fast and reliable access to all centralized applications.
Operational Deployment
This system is normally installed at a headquarters, regional or branch office and connected across the wide area network to another router or secure router at a regional, branch or other smaller remote location.
Modules
Several telecommunication modules for T1/E1 and DS3 ports (clear channel or channelized) allow the system to operate over a wide range of telecommunication circuits. In addition to supporting up to seventy-two Power over Ethernet ports, the Secure Router 4134 can also support up to thirty-one T1/E1 ports, fifty-eight Gigabit Ethernet switching ports, or up to sixty-four FXO or FXS ports.
The firewall is capable of supporting a Session Initiation Protocol (SIP) application-level gateway (ALG), network address translation, and cone network address translation for the UNIStim protocol.
Security
The Avaya Secure Router 4134 has fully integrated firewalls and VPNs for increased reliability; it also includes a stateful packet firewall and protection against over 60 types of distributed denial-of-service attack.
Cryptographic Module Validation (FIPS140-1 and FIPS 140-2)
The SR-2330 has been validated as conforming to the Digital Signature Algorithm (DSA) specified in both FIPS 186-2 with Change Notice 1 dated October 5, 2001 and FIPS 186-3 dated June 2009, both titled Digital Signature Standard (DSS).
NIST has validated the Secure Hash Algorithms
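For illustration, the sign-and-verify cycle that the DSA portion of FIPS 186 specifies can be exercised with a generic library such as the Python cryptography package; this is a standalone sketch, not Avaya's validated implementation, and the message is a placeholder.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa

message = b"configuration backup"                # placeholder payload

private_key = dsa.generate_private_key(key_size=2048)
signature = private_key.sign(message, hashes.SHA256())

# verify() raises InvalidSignature if the message or signature has been altered.
private_key.public_key().verify(signature, message, hashes.SHA256())
print("signature verified")
```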
See also
Avaya
Network security
Avaya Professional Credentials
References
Further reading
External links
Secure Router 4134 Series - Retrieved 22 July 2011
Hardware routers
Innovative Communications Alliance products |
157596 | https://en.wikipedia.org/wiki/FlightGear | FlightGear | FlightGear Flight Simulator (often shortened to FlightGear or FGFS) is a free, open source multi-platform flight simulator developed by the project since 1997.
David Murr started the project on April 8, 1996. The project had its first release in 1997 and continued in development. It has specific builds for a variety of operating systems including Microsoft Windows, macOS, Linux, IRIX, and Solaris.
FlightGear is an atmospheric and orbital flight simulator used in aerospace research and industry. Its flight dynamics engine (JSBSim) was used in a 2015 NASA benchmark for judging new simulation code against the standards of the space industry.
History
FlightGear started as an online proposal made in 1996 by David Murr, in Canada. Dissatisfied with the available proprietary simulators, such as Microsoft Flight Simulator, and arguing that the motivations of their makers did not align with those of the simulators' users ("simmers"), he proposed a new flight simulator developed by volunteers over the Internet. The flight simulator was initially created using custom 3D graphics code. Development of an OpenGL-based version was spearheaded by Curtis Olson starting in 1997. FlightGear incorporated other open-source resources, including the LaRCsim flight dynamics engine from NASA and freely available elevation data. The first working binaries using OpenGL came out in 1997. By 1999 FlightGear had replaced LaRCsim with JSBSim, a flight dynamics engine built to the simulator's needs, and in 2015 NASA used JSBSim alongside six other space-industry standards to create a measuring stick for judging future space-industry simulation code.
FlightGear reached 1.0 in 2007 and 2.0 in 2010, and there were nine major releases under the 2.x and 3.x labels, the final one under the previous numbering scheme being "3.4", since "3.6" was cancelled. The project moved to a regular cadence of two to four releases per year from 2016, with the first version under the new naming scheme being "2016.1". Around that time, the graphical front end "FlightGear Launch Control", also known as "FGRun", was replaced by an integrated Qt launcher. FlightGear's source code is released under the terms of the GNU General Public License and is free and open-source software.
The FlightGear project has been nominated by SourceForge, and subsequently chosen as project of the month by the community, in 2015, 2017, and 2019.
Simulator Features
Physics
Forces experienced by a flying craft depend on the time-varying state of atmospheric fluid flow along the flight path, the atmosphere being a fluid that can exchange energy, moisture, or particles, change phase or other state, and exert force at boundaries formed by surfaces. Fluid behaviour is often characterised by eddies or vortices on varying scales down to the microscopic, but is hard to observe because the air is clear except where moisture changes phase, as in condensation trails or clouds. The atmosphere-terrain boundary interaction follows fluid dynamics, with processes on hugely varying scales; 'weather' is the planetary boundary layer. The aircraft-surface interaction follows the same dynamics, but on a limited range of scales. Forces experienced at any point along a flight path are therefore the result of complicated atmospheric processes on varying spatial scales and of complex flow along the craft's surface. Craft also experience varying gravitational force based on the 3D shape of the potential well and the non-spherical shape of the Earth.
Atmospheric & Environmental Physics
FlightGear can simulate the atmosphere from energy inputs and outputs to the system, such as energy from the sun or volcanic sources, through to fluid flow on various scales and changes of state. It can model different surface characteristics, such as heating or cooling, and the exchange of heat and moisture with the atmosphere depending on factors like wind flow or dew point. FlightGear models the continuously evolving life cycle of phenomena on various scales, driven by the interaction of fluid with terrain, ranging from turbulence on different scales and individual thermals to thunderstorms, moving air layers, and air masses on the scale of thousands of kilometres. Atmospheric water is modelled from state changes, such as condensation into cloud or haze layers with latent heat driving convective fluid flow, through to precipitation as rain droplets, snow, or hail.
The process of generating lift creates turbulence with vortices, and FlightGear models wake turbulence with shedding of wingtip vortices by flown craft as well as AI craft.
FlightGear also has a less physically accurate model that uses METAR weather reports of differing update frequency, which are designed for the safe operation of aerodromes, to discontinuously force the atmosphere based on guesses about processes. Such guesses are fundamentally constrained by the closeness or density of observation stations, as well as by the small-scale, limited, rounded-off, non-smoothly varying, need-to-know precision of the information. Aloft waypoint settings modelling the high-altitude behaviour of wind can be synced to updates from Jeppesen.
FlightGear has a simulation of planetary bodies in the solar system, which is used for purposes such as driving latitude-dependent weather from solar radiation and determining the brightness and position of stars for celestial navigation. There is a model of gravity based on a non-spherical Earth, and craft can even experience differing gravity across their bodies, which exerts a twisting force. A model of the observed variation in the Earth's complex magnetic field, and the option to simulate, to an extent, the propagation of radio signals as they interact with different types of terrain, also exist in FlightGear.
FlightGear uses an exact, non-spherical model of the Earth, and is able to simulate flight in polar regions and at Arctic or Antarctic airports without simulator errors caused by coordinate-system issues.
Flight Dynamics
FlightGear supports multiple flight dynamics engines with differing approaches, and external sources such as MATLAB/Simulink, as well as custom flight models for hot air balloons and spacecraft.
JSBSim
JSBSim is a data-driven flight dynamics engine with a C++ core, built to the needs of the FlightGear project from 1996 to replace NASA's LaRCsim and integrated into FlightGear as the default from 1999. Flight characteristics are preserved even at low frame rates because JSBSim physics is decoupled from rendering and ticks at 120 Hz by default. This also supports high time acceleration, since rendering does not have to run faster and the GPU does not become a bottleneck.
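This decoupling can be illustrated with a short, hypothetical Python sketch (not FlightGear or JSBSim code): physics advances in fixed 120 Hz steps no matter how long each rendered frame takes, and a time-acceleration factor simply feeds more physics steps into each frame.

import time

PHYSICS_DT = 1.0 / 120.0  # fixed physics step, 120 Hz as described above

def run(simulate_step, render_frame, time_accel=1.0):
    # Accumulator pattern: physics ticks in fixed increments,
    # while rendering happens once per loop however long it takes.
    accumulator, previous = 0.0, time.monotonic()
    while True:
        now = time.monotonic()
        accumulator += (now - previous) * time_accel
        previous = now
        while accumulator >= PHYSICS_DT:
            simulate_step(PHYSICS_DT)   # flight-dynamics tick
            accumulator -= PHYSICS_DT
        render_frame()                  # may run at a much lower rate

With time_accel greater than one, more physics steps are executed per rendered frame, so the GPU never has to draw faster to keep up.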
Mass balance, ground reactions, propulsion, aerodynamics, buoyant forces, external forces, atmospheric forces, and gravitational forces can be utilized by JSBSim, the current default flight dynamics engine supported by FlightGear, to determine flight characteristics. JSBSim supports non-terrestrial atmospheres and has been used to model unmanned flight in the Martian atmosphere by NASA.
Benchmark testing by NASA
JSBSim was used by NASA in 2015 together with other space-industry simulation codes, both to establish a reference for judging future code against the requirements and standards of the space industry and to check agreement between the participating codes. The verification tested both atmospheric and orbital flight in six degrees of freedom for simulations, like JSBSim, that supported both. The results from the six participants, consisting of NASA Ames Research Center (VMSRTE), Armstrong Flight Research Center (Core), Johnson Space Center (JEOD), Langley Research Center (LaSRS++, POST-II), Marshall Space Flight Center (MAVERIC), and JSBSim, were anonymized because NASA wanted to encourage participation. However, the assessment found agreement for all test cases between the majority of participants, with the differences being explainable and reducible for the rest, and with the orbital tests agreeing "quite well" for all participants.
YASim
YASim's approach to flight dynamics uses the geometry of the aircraft, taken from the 3D model art at startup, to calculate a rough approximation of the fluid dynamics, conceptually similar to the blade element theory used by some software. This has the conceptual problems that each "element" is considered in isolation, so its effect on the fluid flow over other elements is missed, and that the approximation breaks down for craft in transonic to hypersonic regimes. By contrast, offline approaches like JSBSim can incorporate wind-tunnel data, as well as the results of computational fluid dynamics, whose accuracy is limited only by the nature of the problem and present-day computational resources.
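As a rough, hypothetical illustration of the blade-element idea (a sketch, not YASim code), each surface element's lift can be computed from the local flow in isolation and then summed, which is precisely why interference between elements is missed:

RHO = 1.225  # sea-level air density in kg/m^3

def element_lift(area, airspeed, cl):
    # Classic lift equation applied to one isolated element.
    return 0.5 * RHO * airspeed ** 2 * area * cl

def total_lift(elements, airspeed):
    # Each element ignores the downwash and wake of its neighbours,
    # which is the main limitation noted above.
    return sum(element_lift(e["area"], airspeed, e["cl"]) for e in elements)

wing = [{"area": 0.5, "cl": 0.8} for _ in range(20)]  # 20 equal made-up strips
print(total_lift(wing, airspeed=60.0))                # about 17,640 N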
FlightGear also supports LaRCsim and UIUC.
Time acceleration
FlightGear is able to accelerate and decelerate time, speeding up or slowing down the simulation. Time acceleration is a critical feature for simulating longer flights and space missions. For all interactions with the simulator, it allows people to speed through uneventful parts and to gain more experience of decision-making and problem-solving. It also means that automated simulations used for research finish faster, which is helped by FlightGear's headless mode.
FlightGear supports high time acceleration by allowing parts of the simulation to run at different rates. This saves CPU and GPU resources, and improves performance, by letting unimportant or less time-sensitive parts of the simulation, such as visuals or some aircraft systems, run at slower rates. Separate clocks are available for JSBSim physics, for different parts of the aircraft systems, and for environment simulations at large scale (the celestial simulation) and small scale (weather physics).
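A hypothetical sketch of such multi-rate scheduling (the subsystem names and periods below are illustrative, not FlightGear's actual configuration):

PERIODS = {
    "flight_dynamics": 1.0 / 120.0,   # physics at 120 Hz
    "aircraft_systems": 1.0 / 30.0,   # less time-sensitive systems
    "weather": 1.0,                   # small-scale environment
    "celestial": 60.0,                # large-scale environment
}

def tick(sim_time, last_run, update):
    # Run each subsystem only when its own period has elapsed in simulated time.
    for name, period in PERIODS.items():
        last = last_run.get(name, 0.0)
        if sim_time - last >= period:
            update(name, sim_time - last)  # pass the elapsed subsystem time
            last_run[name] = sim_time

Under time acceleration, sim_time simply advances faster than wall-clock time, and each subsystem still honours its own period in simulated time.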
Rendering and visual cues
Atmosphere rendering
FlightGear's atmospheric rendering provides constantly changing visual cues to the processes affecting atmospheric fluid flow and to their likely evolution and history, making it possible to predict conditions ahead or on returning at a later time. Simulation of directional light scattering in the atmosphere by the Advanced Light Scattering framework shows the 3D distribution, layering, geometry, and even statistical orientation of particles in different scattering regimes such as Mie or Rayleigh. These range from moisture droplets, to smog, to ice crystals of different geometries in clouds or halos.
Cloud rendering
The 3D density distribution of cloud (or condensation trail) moisture rendered by FlightGear acts as a cue to the corresponding 3D structure of the fluid flow, such as the updraft-downdraft loop of a storm cell, internal gravity waves forming undulating cloud bands that signal a sweeping cold front, or wind shear shaping cirrus clouds at higher altitude.
Precipitation and accumulation rendering
FlightGear is able to render rain falling from specific clouds in rain volumes with the correct droplet size, which determines properties such as the thickness and intensity of rainbows. Perceptual phenomena like rain streaks are rendered with streak length shortening as time is slowed. Rain and water-spray streaks on canopy glass provide cues to the relative airflow, while frost and fog with correct light scattering provide cues to temperature.
FlightGear is able to render specified historical accumulation levels of water and snow, accounting for the flatness of the surfaces of both terrain and buildings. This provides cues to surface moisture and friction, and to weather driven by surface heating, which reduces with snow thickness. FlightGear can render gradual snow and ice cover on inland and ocean water.
Hazes and Halos
Layering of hazes is rendered by FlightGear, such as low-lying ground haze with 3D structure, smog related to human activity, and dust. FlightGear renders various halos due to ice crystals in the atmosphere, or due to Mie scattering by artificial lights, such as landing lights, in fog.
Orbital rendering
FlightGear is able to render day/night visuals of Earth from orbit at high detail, with scattering due to clouds, dust, and moisture, as well as effects such as lightning illuminating storm cells. Orientation cues in the cockpit are provided by the changing colour of light from the Sun, Earth, and Moon for craft such as the Space Shuttle. The gradual transition in lighting for spacecraft between upper and lower atmosphere regimes is handled by dedicated rendering code. Auroras are simulated with varying intensity and varying penetration of magnetic flux tubes into the atmosphere, and are visible from both space and the ground.
Accurate rendering of planets, moons, and stars with correct phases and brightness, based on FlightGear's celestial simulation, provides cues and data for celestial navigation without reliance on vulnerable ground aids, including for pre-GPS-era craft. The celestial simulation also allows craft such as the Space Shuttle to use star-tracker instruments.
Environment rendering
FlightGear's Advanced Light Scattering framework simulates locations in time as well as space. The environment simulation renders seasonal change as leaves of different species of trees, bushes, and grass change colour or fall. Simulated swaying of grass, trees, and windsocks provides cues to processes changing the wind field near the ground, while wave simulation provides cues near water. Cloud shadows and the general state of the atmosphere affect light travelling to each point of the environment and then through the atmosphere to the eye; the cloud setup and the spread of particles in the atmosphere change the colour of the light cast on the environment. Water colour therefore changes with the atmosphere overhead and also depends on the water impurities in a region. FlightGear is capable of rendering a variety of volcanic activity of different intensities that, from v2019.2, responds to the wind field, as well as smoke.
The combination of rendering of the state of atmospheric processes, Aurora, simulation of celestial bodies, ground accumulation of rain or snow or dust, ice cover of water, and the environment simulation produces visualisations with a vast number of permutations.
Multiplayer
Several networking options allow FlightGear to communicate with other instances of the simulator. A multiplayer protocol is available for use on a local network in a multi-aircraft environment, which can be used for formation flight or air traffic control simulation. Soon after the original multiplayer protocol became available, it was expanded to allow playing over the Internet. It is possible to see other players in the simulator if they have the same aircraft models installed, and their flight paths can be viewed on the simulator's online multiplayer map.
Several instances of FlightGear can be synchronized to allow for a multi-monitor environment.
Weather
FlightGear uses METAR data to produce live weather patterns in real time. Detailed weather settings allow for 3D clouds, a variety of cloud types, and precipitation. Precipitation and terrain affect turbulence and cloud formation. Aloft waypoint settings allow high-altitude behaviors of wind to be modeled from live weather information, and thermals can also be modeled.
Critical reception
Although not developed or typically analyzed solely as a game in the traditional sense, FlightGear has nevertheless been reviewed in a number of online and offline publications and has received positive reviews as a flight simulator game. Version 1.0.0 was noted as being impressive for a game over a decade in the making, with a wide variety of aircraft and features.
PC Magazine noted how it is designed to be easy to add new aircraft and scenery. Linux Format reviewed version 2.0 and rated it 8/10.
Controversy
In June 2014 Honda lawyers issued a takedown request claiming that the HondaJet model in the simulator infringed on Honda's trademarks. Subsequently, HondaJet became the first model removed from the simulator for legal reasons.
Games journalist Tim Stone, in his simulation column The Flare Path, criticized the practice of third parties attempting to profit from the work of the project's community volunteers, pointing to deceptive practices of stealing media available online from other sims to misrepresent VirtualPilot3d, as well as highlighting allegedly fake customer testimonials. Following up in 2018, Stone wrote a second column in which he again criticized the "ethical standards" and "extraordinary willingness to lie in the pursuit of sales" displayed by the advertisements.
Applications and usages
FlightGear has been used in a range of projects in academia and industry (including NASA). The application has also been used for pilot training and as a research and development platform by various agencies and universities.
The simulator has been used by numerous institutes and companies, such as the NASA/Ames Human Centered System Lab, Pragolet s.r.o., and the Endless Runway Project, a consortium of several European aerospace institutes.
Companies
MathWorks FlightGear to Simulink interface.
NASA/Ames Human Centered System Lab - 737NG full scale cockpit simulator.
Pragolet s.r.o. for light and ultra-light sports aircraft.
PAL-V Europe NV
Max Planck Institute for Biological Cybernetics, Germany, HeliLab and MPI CyberMotion Simulator
Institute for Scientific Research
Endless Runway Project
Endless Runway Project, consortium of several European aerospace institutes.
Universities
Africa
Minia University, Egypt
Asia
The Department of Aircraft and Aeroengine from the Chinese Air Force Engineering University
Nanjing University of Aeronautics and Astronautics, China
Shenyang Institute of Automation, China
Australia
RMIT University, Melbourne, Australia
Europe
Institute of Aerospace Engineering at the RWTH Aachen
University of Naples, Italy
University of Wales Intelligent Robotics Group, Aberystwyth, UK
Delft University of Technology, the Netherlands
Hamburg University of Applied Sciences, Germany
Technical University of Munich
Czech Technical University in Prague
French Aerospace Lab (ONERA) and University of Toulouse, France
Pázmány Péter Catholic University and the Hungarian Academy of Sciences
University of Sheffield, England
Supaéro
Durham University, England
North America
University of Tennessee, Chattanooga, USA
Northeastern University, Boston, USA
Arizona State University, USA
The Center for Coastal & Ocean Mapping/Joint Hydrographic Center at the University of New Hampshire, USA
University of Michigan, USA
University of Toronto Institute for Aerospace Studies, Canada
Purdue University, Indiana, USA
University of Arizona, USA
South America
National Technological University, Haedo, Argentina
Universidade Federal de Minas Gerais, Brazil
See also
Microsoft Flight Simulator
List of open source games
X-Plane (simulator)
GeoFS
YSFlight
List of free and open source software packages
Lockheed Martin Prepar3D
References
External links
About FlightProSim, Flight Simulator Plus, ProFlightSimulator and EarthFlightSim
1997 software
1997 video games
2007 software
Cross-platform software
Flight simulation video games
Free software programmed in C++
General flight simulators
Linux games
Open-source video games |
4045710 | https://en.wikipedia.org/wiki/Computer%20fan | Computer fan | A computer fan is any fan inside, or attached to, a computer case used for active cooling. Fans are used to draw cooler air into the case from the outside, expel warm air from inside and move air across a heat sink to cool a particular component. Both axial and sometimes centrifugal (blower/squirrel-cage) fans are used in computers. Computer fans commonly come in standard sizes, such as 120mm (most common), 140mm, 240mm, and even 360mm. Computer fans are powered and controlled using 3-pin or 4-pin fan connectors.
Usage of a cooling fan
While in earlier personal computers it was possible to cool most components using natural convection (passive cooling), many modern components require more effective active cooling. To cool these components, fans are used to move heated air away from the components and draw cooler air over them. Fans attached to components are usually used in combination with a heat sink to increase the area of heated surface in contact with the air, thereby improving the efficiency of cooling. Fan control is not always an automatic process. A computer's BIOS can control the speed of the built-in fan system for the computer. A user can even supplement this function with additional cooling components or connect a manual fan controller with knobs that set fans to different speeds.
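The kind of mapping a BIOS or fan controller applies can be sketched as a simple temperature-to-duty-cycle curve; this is a hypothetical example, and the thresholds are illustrative rather than taken from any particular firmware.

def fan_duty(temp_c, idle_temp=35.0, max_temp=75.0, min_duty=30.0):
    # Below idle_temp the fan runs at its minimum duty cycle,
    # above max_temp it runs flat out, and in between it ramps linearly.
    if temp_c <= idle_temp:
        return min_duty
    if temp_c >= max_temp:
        return 100.0
    span = (temp_c - idle_temp) / (max_temp - idle_temp)
    return min_duty + span * (100.0 - min_duty)

print(fan_duty(55.0))  # 65.0 percent duty cycle at 55 degrees C with these thresholds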
In the IBM PC compatible market, the computer's power supply unit (PSU) almost always uses an exhaust fan to expel warm air from the PSU. Active cooling on CPUs started to appear on the Intel 80486, and by 1997 was standard on all desktop processors. Chassis or case fans, usually one exhaust fan to expel heated air from the rear and optionally an intake fan to draw cooler air in through the front, became common with the arrival of the Pentium 4 in late 2000.
Applications
Case fan
Fans are used to move air through the computer case. The components inside the case cannot dissipate heat efficiently if the surrounding air is too hot. Case fans may be placed as intake fans, drawing cooler outside air in through the front or bottom of the chassis (where it may also be drawn over the internal hard drive racks), or exhaust fans, expelling warm air through the top or rear. Some ATX tower cases have one or more additional vents and mounting points in the left side panel where one or more fans may be installed to blow cool air directly onto the motherboard components and expansion cards, which are among the largest heat sources.
Standard axial case fans are 40, 60, 80, 92, 120, 140, 200 and 220 mm in width and length. As case fans are often the most readily visible form of cooling on a PC, decorative fans are widely available and may be lit with LEDs, made of UV-reactive plastic, and/or covered with decorative grilles. Decorative fans and accessories are popular with case modders. Air filters are often used over intake fans, to prevent dust from entering the case and clogging up the internal components. Heatsinks are especially vulnerable to being clogged up, as the insulating effect of the dust will rapidly degrade the heatsink's ability to dissipate heat.
PSU fan
While the power supply (PSU) contains a fan with few exceptions, this fan is not meant to be used for case ventilation. The hotter the PSU's intake air is, the hotter the PSU gets. As the PSU temperature rises, the conductivity of its internal components decreases. Decreased conductivity means that the PSU converts more of the input electric energy into thermal energy (heat). This cycle of increasing temperature and decreasing efficiency continues until the PSU either overheats or its cooling fan spins fast enough to keep the PSU adequately supplied with comparatively cool air. The PSU is mainly bottom-mounted in modern PCs, having its own dedicated intake and exhaust vents, preferably with a dust filter in its intake vent.
CPU fan
Used to cool the CPU (central processing unit) heatsink. Effective cooling of a concentrated heat source such as a large-scale integrated circuit requires a heatsink, which may be cooled by a fan; use of a fan alone will not prevent overheating of the small chip.
Graphics card fan
Used to cool the heatsink of the graphics processing unit or the memory on graphics cards. These fans were not necessary on older cards because of their low power dissipation, but most modern graphics cards designed for 3D graphics and gaming need their own dedicated cooling fans. Some of the higher powered cards can produce more heat than the CPU (dissipating up to 350 watts), so effective cooling is especially important. Since 2010, graphics cards have been released with either axial fans, or a centrifugal fan also known as a blower, turbo or squirrel cage fan.
Chipset fan
Used to cool the heatsink of the northbridge of a motherboard's chipset; this may be needed where the system bus is significantly overclocked and dissipates more power than usual, but may otherwise be unnecessary. As more features of the chipset are integrated into the central processing unit, the role of the chipset has been reduced and its heat generation has fallen as well.
Hard drive cooling
Fans may be mounted next to or onto a hard disk drive for cooling purposes. Hard drives can produce considerable heat over time, and are heat-sensitive components that should not operate at excessive temperatures. In many situations, natural convective cooling suffices, but in some cases fans may be required. These may include:
Faster-spinning hard disks with greater heat production (less expensive drives rotated at speeds up to 7,200 RPM; 10,000 and 15,000 RPM drives were available but generated more heat).
Large or dense arrays of disks (including server systems where disks are typically mounted densely)
Any disks which, due to the enclosure or other location they are mounted in, cannot easily cool without fanned air.
Multiple purposes
A case fan may be mounted on a radiator attached to the case, simultaneously operating to cool a liquid cooling device's working fluid and to ventilate the case. In laptops, a single blower fan often cools a heat sink connected to both CPU and GPU using heat pipes. In gaming laptops and mobile workstations, two or more heavy duty fans may be used. In rack-mounted servers, a single row of fans may operate to create an airflow through the chassis from front to rear, which is directed by passive ducts or shrouds across individual components' heat sinks.
Other purposes
Fans are, less commonly, used for other purposes such as:
A water-cooling radiator transfers a lot of heat, and radiator fans have high static pressure (as opposed to case fans, which have high airflow) for dissipating it.
Laptop computers lack large openings in the case for warm air to escape. The laptop may be placed on a cooler – somewhat like a tray with fans built in – to ensure adequate cooling.
In some high-end machines (including many servers), or when additional reliability is required, other chips such as SATA/SAS controllers, high-speed networking controllers (40 Gbps Ethernet, InfiniBand), PCIe switches, coprocessor cards (for example, some Xeon Phi), some FPGA chips, and south bridges are also actively cooled with a heatsink and a dedicated fan. These can be on the motherboard itself or on a separate add-on board, often a PCIe card.
Expansion slot fan: a fan mounted in one of the PCI or PCI Express slots, usually to supply additional cooling to the graphics card, or to expansion cards in general.
Optical drive fan: some internal CD and/or DVD burners included cooling fans.
Memory fan: modern computer memory can generate enough heat that active cooling may be necessary, usually in the form of small fans positioned above the memory chips. This applies especially when the memory is overclocked or overvolted, or when the memory modules include active logic, such as when a system uses Fully Buffered DIMMs (FB-DIMMs). However, with newer lower voltages in use, such as 1.2 V DDR4, this is less commonly needed than it used to be. Most of the time, memory modules located close to the CPU will receive enough airflow from the case or CPU fan, even if the air from the CPU fan and radiator is warm. If the main CPU is water-cooled, this small amount of airflow may be missing, and additional care about airflow in the case, or dedicated memory cooling, is required. Unfortunately, most memory modules do not provide temperature monitoring to measure this easily.
High-power voltage regulator modules (VRMs), often using switch-mode power supplies, generate some heat due to power losses, mostly in the power MOSFETs and in an inductor (choke). Especially in overclocking situations, these require an active cooling fan together with a heatsink. Most MOSFETs will operate correctly at very high temperatures, but their efficiency is lowered and their lifespan potentially limited. Proximity of electrolytic capacitors to a source of heat will decrease their lifespan considerably and end in progressively higher power losses and eventual (catastrophic) failure.
Physical characteristics
Due to the low-pressure, high-volume air flows they create, most fans used in computers are of the axial-flow type; centrifugal and crossflow types are also used. Two important functional specifications are the airflow that can be moved, typically stated in cubic feet per minute (CFM), and the static pressure. The sound volume figure, given in decibels, can also be very important for home and office computers; larger fans are generally quieter for the same CFM.
Many gamers, case modders, and enthusiasts use fans illuminated with colored LED lights. Multi-colored fans are also available. Colors and lighting patterns may be controlled or programmed via an RGB fan controller, similar to Christmas lights. There is little difference in performance between normal case fans and those with RGB lights, which are chosen mainly for the aesthetics they add to a build; Asus Aura Sync case fans are popular among custom PC builders.
Dimensions
The dimensions and mounting holes must suit the equipment that uses the fan. Square-framed fans are usually used, but round frames are also used, often so that a larger fan than the mounting holes would otherwise allow can be used (e.g., a 140 mm fan with holes for the corners of a 120 mm square fan). The width of square fans and the diameter of round ones are usually stated in millimeters. The dimension given is the outside width of the fan, not the distance between mounting holes. Common sizes include 40 mm, 60 mm, 80 mm, 92 mm, 120 mm and 140 mm, although
8 mm, 17 mm, 20 mm, 25 mm, 30 mm, 35 mm, 38 mm, 45 mm, 50 mm, 70 mm, 200 mm, 220 mm, 250 mm and 360 mm sizes are also available. Heights, or thickness, are typically 10 mm, 15 mm, 25 mm or 38 mm.
Typically, square 120 mm and 140 mm fans are used where cooling requirements are demanding, as for computers used to play games, and for quieter operation at lower speeds. Larger fans are usually used for cooling the case, CPUs with large heatsinks, and ATX power supplies. Square 80 mm and 92 mm fans are used in less demanding applications, or where larger fans would not be compatible. Smaller fans are usually used for cooling CPUs with small heatsinks, SFX power supplies, graphics cards, northbridges, etc.
Rotational speed
The speed of rotation (specified in revolutions per minute, RPM), together with the static pressure, determines the airflow for a given fan. Where noise is an issue, larger, slower-turning fans are quieter than smaller, faster fans that can move the same airflow. Fan noise has been found to be roughly proportional to the fifth power of fan speed; halving the speed reduces the noise by about 15 dB. Axial fans may rotate at speeds of up to around 38,000 rpm for smaller sizes.
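The fifth-power relationship can be checked with a short calculation (a simplified model that ignores other noise sources): if sound power scales with the fifth power of speed, the level change in decibels is 50 times the base-10 logarithm of the speed ratio.

import math

def noise_change_db(speed_ratio):
    # Sound power ~ speed^5, so the decibel change is 50 * log10(ratio).
    return 50.0 * math.log10(speed_ratio)

print(round(noise_change_db(0.5), 1))  # -15.1 dB when the speed is halved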
Fans may be controlled by sensors and circuits that reduce their speed when temperature is not high, leading to quieter operation, longer life, and lower power consumption than fixed-speed fans. Fan lifetimes are usually quoted under the assumption of running at maximum speed and at a fixed ambient temperature.
Air pressure and flow
A fan with high static pressure is more effective at forcing air through restricted spaces, such as the gaps between a radiator or heatsink; static pressure is more important than airflow in CFM when choosing a fan for use with a heatsink. The relative importance of static pressure depends on the degree to which the airflow is restricted by geometry; static pressure becomes more important as the spacing between heatsink fins decreases. Static pressure is usually stated in either mm Hg or mm H2O.
Bearing types
The type of bearing used in a fan can affect its performance and noise. Most computer fans use one of the following bearing types:
Sleeve bearings use two surfaces lubricated with oil or grease as a friction contact. They often use porous sintered sleeves to be self-lubricating, requiring only infrequent maintenance or replacement. Sleeve bearings are less durable at higher temperatures, as the contact surfaces wear and the lubricant dries up, eventually leading to failure; however, lifetime is similar to that of ball-bearing types (generally a little less) at relatively low ambient temperatures. Sleeve bearings may be more likely to fail at higher temperatures, and may perform poorly when mounted in any orientation other than vertical. The typical lifespan of a sleeve-bearing fan may be around 30,000 hours. Fans that use sleeve bearings are generally cheaper than fans that use ball bearings, and are quieter at lower speeds early in their life, but can become noisy as they age.
Rifle bearings are similar to sleeve bearings, but are quieter and have almost as much lifespan as ball bearings. The bearing has a spiral groove in it that pumps fluid from a reservoir. This allows them to be safely mounted with the shaft horizontal (unlike sleeve bearings), since the fluid being pumped lubricates the top of the shaft. The pumping also ensures sufficient lubricant on the shaft, reducing noise, and increasing lifespan.
Fluid bearings (or "Fluid Dynamic Bearing", FDB) have the advantages of near-silent operation and high life expectancy (though not longer than ball bearings), but tend to be more expensive.
Ball bearings: Though generally more expensive than fluid bearings, ball-bearing fans do not suffer the same orientation limitations as sleeve-bearing fans, are more durable at higher temperatures, and are quieter than sleeve-bearing fans at higher rotation speeds. The typical lifespan of a ball-bearing fan may be over 60,000 hours.
Magnetic bearings or maglev bearings, in which the fan is repelled from the bearing by magnetism.
Connectors
Connectors usually used for computer fans are the following:
Three-pin Molex connector KK family
This Molex connector is used when connecting a fan to the motherboard or other circuit board. It is a small, thick, rectangular in-line female connector with two polarizing tabs on the outer-most edge of one long side. Pins are square and on a 0.1 inch (2.54 mm) pitch. The three pins are used for ground, +12 V power, and a tachometer signal. The Molex part number of receptacle is 22-01-3037. The Molex part number of the individual crimp contacts is 08-50-0114 (tin plated) or 08-55-0102 (semi gold plated). The matching PCB header Molex part number is 22-23-2031 (tin plated) or 22-11-2032 (gold plated). A corresponding wire stripper and crimping tools are also required.
Four-pin Molex connector KK family
This is a special variant of the Molex KK connector with four pins but with the locking/polarisation features of a three-pin connector. The additional pin is used for a pulse-width modulation (PWM) signal to provide variable speed control. These can be plugged into 3-pin headers, but will lose their fan speed control. The Molex part number of receptacle is 47054-1000. The Molex part number of individual crimp contacts is 08-50-0114. The Molex part number of the header is 47053-1000.
Four-pin Molex connector
This connector is used when connecting the fan directly to the power supply. It consists of two wires (yellow/12 V and black/ground) leading to and splicing into a large in-line four-pin male-to-female Molex connector. The other two wires of the connector provide 5 V (red) and ground (also black), and are not used in this case. This is the same connector as was used on hard drives before SATA became standard.
Three-pin Molex connector PicoBlade family
This connector is used with notebook fans or when connecting the fan to the video card.
Dell proprietary
This proprietary Dell connector is an expansion of a simple three-pin female IC connector by adding two tabs to the middle of the connector on one side and a lock-tab on the other side. The size and spacing of the pin sockets is identical to a standard three-pin female IC connector and three-pin Molex connector. Some models have the wiring of the white wire (speed sensor) in the middle, whereas the standard 3-pin Molex connector requires the white wire as pin #3, thus compatibility issues may exist.
Others
Some computer fans use two-pin connectors, of various designs.
Alternatives
If a fan is not desirable because of noise, reliability, or environmental concerns, there are some alternatives. Some improvement can be achieved by eliminating all fans except the one in the power supply, which also draws hot air out of the case.
Systems can be designed to use passive cooling alone, reducing noise and eliminating moving parts that may fail. This can be achieved by:
Natural convection cooling: carefully designed, correctly oriented, and sufficiently large heatsinks can dissipate up to 100 W by natural convection alone
Heatpipes to transfer heat out of the case
Undervolting or underclocking to reduce power dissipation
Submersive liquid cooling, placing the motherboard in a non-electrically conductive fluid, provides excellent convection cooling and protects from humidity and water without the need for heatsinks or fans. Special care must be taken to ensure compatibility with adhesives and sealants used on the motherboard and ICs. This solution is used in some external environments such as wireless equipment located in the wild.
Other methods of cooling include:
Water cooling
Mineral oil
Liquid nitrogen
Refrigeration, e.g. by Peltier effect devices
Ionic wind cooling is being researched, whereby air is moved by ionizing air between two electrodes. This replaces the fan and has the advantage of no moving parts and less noise.
See also
Glossary of computer hardware terms
Fan (machine)
Centrifugal fan
Computer cooling
Computer fan control
Small form factor (SFF)
Software programs for controlling PC fans: Argus Monitor and SpeedFan
References
External links
4-Wire PWM Controlled Fans Specification v1.3 – Intel
3-Wire and 4-Wire Fan Connectors – Intel
3-Wire and 4-Wire Fan Pinouts – AllPinouts
How PC Fans Work (2/3/4-wire) – PCB Heaven
Why and How to Control (2/3/4-wire) Fan Speed for Cooling Electronic Equipment – Analog Devices
PWM Fan Controller project – Alan's Electronic Projects
Asus Aura RGB Fans - RGB Advisor
Computer hardware cooling
Ventilation fans |
11847421 | https://en.wikipedia.org/wiki/Mediaroom | Mediaroom | Mediaroom is a collection of software for operators to deliver Internet Protocol television (IPTV) subscription services, including content-protected, live, digital video recorder, video on demand, multiscreen, and applications. These services can be delivered via a range of devices inside and outside customers' homes, including wired and Wi-Fi set top boxes, PCs, tablets, smartphones and other connected devices – over both the operator's managed IP networks as well as "over the top" (OTT) or unmanaged networks.
According to a marketing firm, Mediaroom was the market leader in IPTV for 2014.
History
Microsoft TV platform
Microsoft announced an UltimateTV service from DirecTV in October 2000, based on technology acquired from WebTV Networks (later renamed MSN TV).
The software was called the Microsoft TV platform (which included the Foundation Edition); it had integrated digital video recorder (DVR) and Internet access capabilities. It was released on October 26, 2000. The software to decode and view digital video programming was derived from WebTV (later called MSN TV). UltimateTV had support for picture-in-picture and could record up to 35 hours of video content. The Internet capabilities were provided by Microsoft TV platform software, which was used for the TV guide. The TV guide could display the programming schedule for 14 days, and recording could be scheduled for any of the shows. It could also be used to access e-mail. However, Microsoft lost distribution when DirecTV accepted an acquisition bid by EchoStar, which had its own DVR. By 2003 UltimateTV had been taken off the market, even though it was still supported by DirecTV and the acquisition by EchoStar had failed.
The UltimateTV developers in Mountain View, California were eliminated by early 2002.
By June 2002, Moshe Lichtman replaced Jon DeVaan as leader of the division as more reductions were announced.
Foundation Edition
The Microsoft TV Foundation Edition platform integrated video-on-demand (VOD), DVR and HDTV programming with live television programming. It includes an electronic programming guide (EPG) that could be used to access any supported service from a centralized directory. The EPG could be used to search and filter the listings as well. The EPG was released around 2002. Comcast announced it would adopt this software in May 2004. Microsoft TV Foundation Edition platform also included an authoring environment that could be used to create content consumable from the set top box.
IPTV Edition
Microsoft TV IPTV Edition is an IPTV platform for accessing both on-demand as well as live television content over a 2-way IP network, coupled with DVR functionality. It is to be used with cable networks that have an IPTV infrastructure.
Microsoft Mediaroom
The IPTV platform was renamed Microsoft Mediaroom on June 18, 2007 at the NXTcomm conference. In January 2010, Microsoft Mediaroom 2.0 was announced at the International Consumer Electronics Show. On April 8, 2013, Microsoft and Ericsson announced plans for Ericsson to purchase Mediaroom. The sale was completed on September 5, 2013, and the platform officially became Ericsson Mediaroom.
Mediaroom
On February 6, 2014, Ericsson announced it had entered into an agreement to purchase multiscreen video platform company Azuki Systems. Azuki Systems was renamed Ericsson Mediaroom Reach.
MediaKind
On July 10, 2018, it was announced that the new identity of Ericsson Media Solutions is MediaKind. The CEO is Allen Broome.
Products
Current key products in Mediaroom’s portfolio include Mediaroom, Mediaroom Reach, and MediaFirst TV Platform.
As of June 2016, Mediaroom TV was used in 65 commercial deployments in 34 countries, delivering services to over 16 million households via more than 30 million devices.
Mediaroom TV platforms are offered by 90 operators, including AT&T, Deutsche Telekom, CenturyLink, Telus, Hawaiian Telcom, Bell Canada (including Bell MTS), Hargray, Singtel, Telefónica SA, Cross Telephone, and Portugal Telecom.
See also
Windows Media Center
Interactive television
Smart TV
List of smart TV platforms and middleware software
10-foot user interface
Set-top box
Tasman (browser engine)
Xbox Video
References
External links
Mediaroom – official website.
Microsoft TV homepage
Streaming television
Microsoft software |
55072392 | https://en.wikipedia.org/wiki/Felice%20Trojani | Felice Trojani | Felice Trojani (18 April 1897 – 3 November 1971) was an Italian airship and airplane engineer.
He collaborated with Umberto Nobile and took part in the preparation of the dirigible expedition to the North Pole; the airship was lost during the 1928 flight over the polar pack ice. Trojani was one of the survivors of the disaster and, along with his companions, was rescued from the Arctic pack ice after 48 days spent sheltering in the famous Red Tent, which he had designed.
Biography
Trojani told his life story in his book "The Tail of Minos", dedicated to his passion for flying. His interest began at the age of 11, on 24 May 1908, when he watched Léon Delagrange's first attempt to fly in Rome from the opposite bank of the Tiber. The book goes on to describe his youth in Rome, his secondary studies at the Torquato Tasso gymnasium high school in Rome, his entrance to the School of Application for Engineers at San Pietro in Vincoli in Rome, his call to arms as an officer cadet in the Great War, and his subsequent imprisonment in Germany.
On returning from captivity, Felice Trojani resumed his studies in engineering and found employment at CNA. He traveled to Japan in January 1927 to join Umberto Nobile for the assembly of the N-3 airship, built in Rome's workshops for the Imperial Japanese Navy; in Japan, Nobile asked Trojani to collaborate on the preparation of the Norge airship expedition to the North Pole.
He participated in the design and construction of the Littorio Airport in Rome. In 1927 he was called upon by Umberto Nobile, first among all the crew members, to help create and take part in the expedition of the airship Italy to the North Pole, collaborating on its design and assembly.
Upon returning from the Soviet Union, he became the technical director of Aeronautica Umbra SA (AUSA) in Foligno, where he designed the AUSA AUT 18 and AUT 45 aircraft. During the Second World War he worked in Rome as an engineer for the Castelli Company at the Vatican City.
At the end of World War II he emigrated to São Paulo, where he opened a precision mechanics business. He was the only survivor of the airship Italy never to have publicly told his version of events, partly because of the ban on doing so (which was ignored by everyone else). He was contacted in 1960 by the American psychiatrist George Simmons, who was at that time looking for information for his volume Target: Arctic, in which he analyzes the psychology of the participants in the trip to the North Pole. Simmons convinced Felice Trojani to finally write his version and to recount the background to the expedition and its consequences for his life. The Tail of Minos became the tale of half a century of aeronautics in Italy, from its dawn until the Second World War.
Publications
Felice Trojani, The Tail of Minosse: A Man's Life, A Business History, IX edition, Milan 2007, Ugo Mursia.
Felice Trojani, Last Flight, IV edition, Milan 2008, Ugo Mursia.
Felice Trojani, Roald Engelbert Amundsen - The Hero of Polar Ice, Milan 1971, Ugo Mursia
Felice Trojani, The Queen of Tuxar, Milan 1970, Ugo Mursia
Felice Trojani, The Aviation Novel, Milan 1969, Ugo Mursia - Rome 2012, Lulu
Felice Trojani, Lessons Learned Airplane Pilots, CNA Cerveteri 1924
Felice Trojani, Opening of a new airport in Rome for airplanes and seaplanes, IV International Congress of Air Navigation, Rome, October 1927
Giorgio Evangelisti, A good and adventurous designer: Felice Trojani, in Air People, vol. IV, p. 298, Editorial Olimpia, Florence 1996
Giuseppe Ciampaglia, "Arturo Mercanti, an extraordinary precursor of cycling, automobile and aviation", IBN publisher, Rome 2014.
Giuseppe Ciampaglia, "The planes and engines of Giovanni Bonmartini's National Aeronautical Company", IBN publisher, Rome 2012.
Giuseppe Ciampaglia, "Felice Trojani, a Roman at the North Pole", Strenna dei Romanisti 2015, Rome 2015, Editrice Roma Amor.
References
1897 births
1971 deaths
Engineers from Rome
20th-century Italian engineers |
4381377 | https://en.wikipedia.org/wiki/Remote%20Initial%20Program%20Load | Remote Initial Program Load | Remote Initial Program Load (RIPL or RPL) is a protocol for starting a computer and loading its operating system from a server via a network. Such a server runs a network operating system such as LAN Manager, LAN Server, Windows NT Server, Novell NetWare, LANtastic, Solaris or Linux.
RIPL is similar to Preboot Execution Environment (PXE), but it uses the Novell NetWare-based boot method. It was originally developed by IBM.
IBM LAN Server
IBM LAN Server enables clients (RIPL requesters) to load the operating systems DOS or OS/2 via the 802.2/DLC-protocol from the LAN (often Token Ring). Therefore, the server compares the clients' requests with entries in its RPL.MAP table. Remote booting DOS workstations via boot images was supported as early as 1990 by IBM LAN Server 1.2 via its PCDOSRPL protocol. IBM LAN Server 2.0 introduced remote booting of OS/2 stations (since OS/2 1.30.1) in 1992.
RPL and DOS
For DOS remote boot to work, the RPL boot loader is loaded into the client's memory over the network before the operating system starts. Without special precautions the operating system could easily overwrite the RPL code during boot, since the RPL code resides in unallocated memory (typically at the top of the available conventional memory). The RPL code hides and thereby protects itself from being overwritten by hooking INT 12h and reducing the memory reported by this BIOS service by its own size. INT 12h is used by DOS to query the amount of available memory when initializing its own real-mode memory allocation scheme. This causes problems on more modern DOS systems, where free real-mode address ranges may be utilized by the operating system in order to relocate parts of itself and load drivers high, so that the amount of available conventional memory is maximized. Typically, various vendor- and version-specific "dirty tricks" had to be used by the RPL code in order to survive this very dynamic boot process and to let DOS seamlessly regain control over the memory occupied by the RPL once the boot is complete.
Since MS-DOS/PC DOS 5.0 and DR DOS 6.0, the operating system checks if the RPL has hooked INT 2Fh by looking for a "RPL" signature at the code pointed to by INT 2Fh. If present, DOS calls INT 2Fh/AX=4A06h to retrieve the amount of memory from the RPL and integrate it into its own memory allocation, thereby protecting the RPL code from being overwritten by other programs. Still, it remained the RPL's difficult responsibility to cleanly remove itself from memory at the end of the boot phase, if possible.
RPLOADER and DR-DOS
In addition to this "RPL" interface, DR DOS 6.0 and higher since 1991 support a more flexible extension named "RPLOADER". If DR DOS detects the presence of RPLOADER rather than RPL only, it starts to issue INT 2F/AX=12FFh/BX=0005h broadcasts at certain critical stages in the boot process. The RPL code can use them to relocate itself in memory (in order to avoid conflicts with other resident software or to avoid memory fragmentation when the RPL memory is freed later on), or to hook into and better integrate with the operating system in order to perform its final cleanup tasks in a well-defined and coordinated manner through a robust and supported backend interface rather than mere hacks. This helps to improve compatibility without having to adapt the RPL code with each new version of the operating system, and it avoids unnecessary memory fragmentation and thereby increases available memory for DOS programs to run. The interface can also be utilized to run DR DOS as a task under a host operating system such as Concurrent DOS.
Since 2018, RxDOS 7.24 supports the "RPLOADER" broadcasts as well.
See also
Initial Program Load
Network booting
PROTMAN$ (Protocol Manager from Microsoft LAN Manager)
Self-relocation
Self-replication
NetWare DOS Requester
NetWare Client 32 for DOS/Windows
References
Further reading
GG24-3671-00: IBM Personal System/2 Advanced Server Planning Guide (IBM Redbook)
Network booting |
938833 | https://en.wikipedia.org/wiki/Multi-agent%20system | Multi-agent system | A multi-agent system (MAS or "self-organized system") is a computerized system composed of multiple interacting intelligent agents. Multi-agent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Intelligence may include methodic, functional, procedural approaches, algorithmic search or reinforcement learning.
Despite considerable overlap, a multi-agent system is not always the same as an agent-based model (ABM). The goal of an ABM is to search for explanatory insight into the collective behavior of agents (which don't necessarily need to be "intelligent") obeying simple rules, typically in natural systems, rather than in solving specific practical or engineering problems. The terminology of ABM tends to be used more often in the sciences, and MAS in engineering and technology. Applications where multi-agent systems research may deliver an appropriate approach include online trading, disaster response, target surveillance and social structure modelling.
Concept
Multi-agent systems consist of agents and their environment. Typically multi-agent systems research refers to software agents. However, the agents in a multi-agent system could equally well be robots, humans or human teams. A multi-agent system may contain combined human-agent teams.
Agents can be divided into types spanning simple to complex. Categories include:
Passive agents or "agent without goals" (such as obstacle, apple or key in any simple simulation)
Active agents with simple goals (like birds in flocking, or wolf–sheep in prey-predator model)
Cognitive agents (complex calculations)
Agent environments can be divided into:
Virtual
Discrete
Continuous
Agent environments can also be organized according to properties such as accessibility (whether it is possible to gather complete information about the environment), determinism (whether an action causes a definite effect), dynamics (how many entities influence the environment in the moment), discreteness (whether the number of possible actions in the environment is finite), episodicity (whether agent actions in certain time periods influence other periods), and dimensionality (whether spatial characteristics are important factors of the environment and the agent considers space in its decision making). Agent actions are typically mediated via an appropriate middleware. This middleware offers a first-class design abstraction for multi-agent systems, providing means to govern resource access and agent coordination.
Characteristics
The agents in a multi-agent system have several important characteristics:
Autonomy: agents at least partially independent, self-aware, autonomous
Local views: no agent has a full global view, or the system is too complex for an agent to exploit such knowledge
Decentralization: no agent is designated as controlling (or the system is effectively reduced to a monolithic system)
Self-organisation and self-direction
Multi-agent systems can manifest self-organisation as well as self-direction and other control paradigms and related complex behaviors even when the individual strategies of all their agents are simple. When agents can share knowledge using any agreed language, within the constraints of the system's communication protocol, the approach may lead to a common improvement. Example languages are Knowledge Query and Manipulation Language (KQML) and Agent Communication Language (ACL).
System paradigms
Many MAS are implemented in computer simulations, stepping the system through discrete "time steps". The MAS components communicate typically using a weighted request matrix, e.g.
Speed-VERY_IMPORTANT: min=45 mph,
Path length-MEDIUM_IMPORTANCE: max=60 expectedMax=40,
Max-Weight-UNIMPORTANT
Contract Priority-REGULAR
and a weighted response matrix, e.g.
Speed-min:50 but only if weather sunny,
Path length:25 for sunny / 46 for rainy
Contract Priority-REGULAR
note – ambulance will override this priority and you'll have to wait
A challenge-response-contract scheme is common in MAS systems (a minimal sketch follows this list), where
First a "Who can?" question is distributed.
Only the relevant components respond: "I can, at this price".
Finally, a contract is set up, usually in several short communication steps between sides,
also considering other components, evolving "contracts" and the restriction sets of the component algorithms.
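A minimal Python sketch of such a negotiation round, with made-up agents and pricing and not tied to any particular MAS framework:

class Hauler:
    def __init__(self, name, rate):
        self.name, self.rate = name, rate
    def can(self, task):
        return task["weight"] <= 10          # simple capability check
    def bid(self, task):
        return self.rate * task["distance"]  # "I can, at this price"
    def accept(self, task, price):
        return (self.name, price)            # the agreed contract

def contract_round(task, agents):
    # "Who can?" broadcast: only capable agents answer with a price.
    bids = [(a.bid(task), a) for a in agents if a.can(task)]
    if not bids:
        return None
    price, winner = min(bids, key=lambda b: b[0])  # award to the cheapest bidder
    return winner.accept(task, price)              # final confirmation step

print(contract_round({"weight": 4, "distance": 12},
                     [Hauler("A", 2.0), Hauler("B", 1.5)]))  # ('B', 18.0)

Real systems add the evolving contracts and restriction sets mentioned above, typically over several message exchanges rather than a single call.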
Another paradigm commonly used with MAS is the "pheromone", where components leave information for other nearby components. These pheromones may evaporate or concentrate with time; that is, their values may decrease (or increase).
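A hypothetical sketch of the pheromone idea on a grid, where values decay each step unless agents re-deposit them:

EVAPORATION = 0.9  # fraction of each pheromone value surviving a time step

def pheromone_step(grid, deposits):
    # grid and deposits both map (x, y) cells to pheromone amounts.
    for cell in grid:
        grid[cell] *= EVAPORATION                   # evaporation
    for cell, amount in deposits.items():
        grid[cell] = grid.get(cell, 0.0) + amount   # concentration
    return grid

trail = pheromone_step({}, {(2, 3): 1.0})  # an agent marks cell (2, 3)
trail = pheromone_step(trail, {})          # no new deposits this step
print(trail[(2, 3)])                       # 0.9 after one step of decay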
Properties
MAS tend to find the best solution for their problems without intervention. There is high similarity here to physical phenomena, such as energy minimizing, where physical objects tend to reach the lowest energy possible within the physically constrained world. For example: many of the cars entering a metropolis in the morning will be available for leaving that same metropolis in the evening.
The systems also tend to prevent propagation of faults, self-recover and be fault tolerant, mainly due to the redundancy of components.
Research
The study of multi-agent systems is "concerned with the development and analysis of sophisticated AI problem-solving and control architectures for both single-agent and multiple-agent systems." Research topics include:
agent-oriented software engineering
beliefs, desires, and intentions (BDI)
cooperation and coordination
distributed constraint optimization (DCOPs)
organization
communication
negotiation
distributed problem solving
multi-agent learning
agent mining
scientific communities (e.g., on biological flocking, language evolution, and economics)
dependability and fault-tolerance
robotics, multi-robot systems (MRS), robotic clusters
Frameworks
Frameworks have emerged that implement common standards (such as the FIPA and OMG MASIF standards). These frameworks, e.g. JADE, save time and aid in the standardization of MAS development.
Currently, though, no standard is actively maintained by FIPA or OMG. Efforts toward the further development of software agents in an industrial context are carried out in the IEEE IES Technical Committee on Industrial Agents.
Applications
MAS have not only been applied in academic research, but also in industry. MAS are applied in the real world to graphical applications such as computer games. Agent systems have been used in films. They are widely advocated for use in networking and mobile technologies, to achieve automatic and dynamic load balancing, high scalability, and self-healing networks. They are being used for coordinated defence systems.
Other applications include transportation, logistics, graphics, manufacturing, power system, smartgrids and GIS.
Also, multi-agent systems artificial intelligence (MAAI) is used for simulating societies, which is helpful in the fields of climate, energy, epidemiology, conflict management, child abuse, .... Some organisations working on using multi-agent system models include the Center for Modelling Social Systems, the Centre for Research in Social Simulation, the Centre for Policy Modelling, and the Society for Modelling and Simulation International. Hallerbach et al. discussed the application of agent-based approaches for the development and validation of automated driving systems via a digital twin of the vehicle-under-test and microscopic traffic simulation based on independent agents. Waymo has created a multi-agent simulation environment, Carcraft, to test algorithms for self-driving cars. It simulates traffic interactions between human drivers, pedestrians, and automated vehicles. People's behavior is imitated by artificial agents based on data of real human behavior.
See also
Comparison of agent-based modeling software
Agent-based computational economics (ACE)
Artificial brain
Artificial intelligence
Artificial life
Artificial life framework
AI mayor
Black box
Blackboard system
Complex systems
Discrete event simulation
Distributed artificial intelligence
Emergence
Evolutionary computation
Game theory
Human-based genetic algorithm
Knowledge Query and Manipulation Language (KQML)
Microbial intelligence
Multi-agent planning
Pattern-oriented modeling
PlatBox Project
Reinforcement learning
Scientific community metaphor
Self-reconfiguring modular robot
Simulated reality
Social simulation
Software agent
Swarm intelligence
Swarm robotics
References
Further reading
The Journal of Autonomous Agents and Multi-Agent Systems (JAAMAS)
Whitestein Series in Software Agent Technologies and Autonomic Computing, published by Springer Science+Business Media Group
Cao, Longbing, Gorodetsky, Vladimir, Mitkas, Pericles A. (2009). Agent Mining: The Synergy of Agents and Data Mining, IEEE Intelligent Systems, vol. 24, no. 3, 64-72.
Artificial intelligence
Multi-robot systems |
57951641 | https://en.wikipedia.org/wiki/VR%20Systems | VR Systems | VR Systems is a provider of elections technology systems and software. VR Systems is based in Tallahassee, Florida. The company's products are used in elections in eight U.S. states. The company was founded in 1992 by Jane and David Watson. The CEO and President is Mindy Perkins.
History
VR Systems was founded in Florida in 1992 and grew its voter registration system, VoterFocus, in the years following the passage of the Help America Vote Act in 2002. In 2004, in response to devastation caused by Hurricane Charley in South Florida, VR created the EViD electronic pollbook, designed to check in voters at central locations because many of the precincts in the area had been destroyed. VR Systems participates in a United States Department of Homeland Security executive committee and has initiated a cybersecurity communications education program with election administrators nationwide. Today, all counties in Florida use VR products. In 2010, VR became a 100% employee-owned company.
Russian hacking controversy
VR Systems was reportedly targeted by operatives of the Russian Main Intelligence Directorate (GRU) in and around August 2016. Russian actors also attempted to impersonate VR Systems by creating a false email address as part of a spear-phishing campaign targeting state electoral officials. There are no reports that the spear-phishing campaign was successful. VR Systems insisted that none of its employees fell for the Russian phishing scam and that none of its systems were hacked. An investigation by North Carolina proved inconclusive.
Timeline of Russian hacking controversy
On August 24th, 2016, Russian hackers sent phishing emails to VR Systems.
On August 30th, VR Systems experienced an election-reporting mishap during the state primary in Broward County, Florida, when preliminary vote totals were posted live before the election ended.
On September 30th, the Federal Bureau of Investigation (FBI) held a conference call with Florida election officials warning them to look out for suspicious activity coming from specific IP addresses. VR Systems, which was on the call, discovered activity from the IP addresses and notified the FBI. On October 31st, Russian hackers sent spear-phishing emails from a fake account to more than 120 election officials in Florida, North Carolina, and other states. If opened, the documents attached to the emails would invisibly download a malware package that could have provided the attacker with remote control over a target's computer. VR Systems immediately alerted its clients and notified the FBI, but the company could not fully estimate the scope of the attack.
On Election Day, November 8, Durham County, North Carolina, experienced problems with its VR Systems poll book software in five precincts. State officials immediately ordered Durham County to abandon the laptops in favor of paper printouts of the voter list to check in voters. However, the switch caused extensive delays at some precincts, leading an unknown number of voters to leave without casting ballots.
Products
VR Systems offers the EViD electronic pollbook, Voter Focus voter registration software, ELM online training and website services specifically designed for the elections community.
EViD electronic pollbook
The EViD electronic pollbook, short for Electronic Voter Identification, is available as a tablet, an all-in-one station or customized for an existing device. More than 14,000 EViDs were in use during the 2016 elections in eight U.S. states: California, Florida, Illinois, Indiana, North Carolina, New York, Virginia, and West Virginia. It was used in 23 of North Carolina's 100 counties and in 64 of Florida's 67 counties. The latter include Miami-Dade, the state's most populous county. Check-in proceeds in four steps: 1. The activator is plugged in and connected to any wired or wireless network. 2. The voter signs for their ballot at an EViD terminal and receives a ballot ticket. 3. The voter takes the ballot ticket to receive their ballot; a custom ballot can be printed through the DirectPrint printing option. 4. Voting data is processed automatically and the device can be packed up.
Voter Focus
Voter Focus is an elections management solution used before, during, and after election day. It organizes the election cycle, including voter registration, mail ballot delivery, precinct look-up, poll worker management and candidate requests. Voter Focus comes with automatic software updates, compliance updates and extensive built-in Q&A capabilities.
ELM
ELM is an online platform designed specifically for election worker training. Each jurisdiction can create custom training by reusing existing media content or content from the ELM collaborative library.
VR Tower
VR Tower is designed for elections officials who would like a complete website solution with maintenance tools. The websites specifies in politician-voter communication.
External links
VR Systems Official website
References
1992 establishments in Florida
Companies based in Florida
Election technology companies |
59268894 | https://en.wikipedia.org/wiki/N.%20Asokan | N. Asokan | N. Asokan is a Professor of Computer Science and the David R. Cheriton Chair in Software Systems at the University of Waterloo’s David R. Cheriton School of Computer Science. He is also an Adjunct Professor in the Department of Computer Science at Aalto University.
Education and career
Asokan received a Bachelor of Technology (BTech) Honours in Computer Science & Engineering from the Indian Institute of Technology Kharagpur in 1988, a Master of Science (MS) in Computer and Information Science from Syracuse University in 1989, and a PhD in Computer Science from the University of Waterloo in 1998. His doctoral thesis was on the topic of Fairness in Electronic Commerce.
From 1999 to 2012 he was employed at Nokia Research Center (NRC) in Helsinki, Finland, where he worked on several notable projects, including contributions to the design of the numeric comparison protocol as part of the Bluetooth Secure Simple Pairing update, as well as what would become the Generic Bootstrapping Architecture.
From September 2012 until December 2017 he was a Professor of Computer Science at the University of Helsinki (part-time from August 2013 onwards). In 2013 he became a tenured (full) Professor of Computer Science at Aalto University, where he co-led the Secure Systems Group (SSG) and established the Helsinki-Aalto Center for Information Security (HAIC), since renamed to the Helsinki-Aalto Institute for Cybersecurity.
At Aalto University he led research projects funded by the Academy of Finland, Business Finland, and various companies. He was a principal investigator (PI) of the Intel Research Institute for Collaborative Resilient and Autonomous Systems (CARS).
In 2019 he joined the David R. Cheriton School of Computer Science at the University of Waterloo as a (full) Professor and a David R. Cheriton Chair in Software Systems.
Asokan is the inventor of over 50 granted patents.
Awards and recognition
Fellow of the Association for Computing Machinery for contributions to systems security and privacy, especially of mobile systems (2018)
Association for Computing Machinery Special Interest Group on Security, Audit and Control (SIGSAC) Outstanding Innovation Award for pioneering research on fair-exchange protocols, trusted device pairing and mobile trusted execution environments that has had widespread impact and led to large-scale deployment (2018)
Fellow of the Institute of Electrical and Electronics Engineers for contributions to system security and privacy (2017)
Association for Computing Machinery (ACM) Distinguished Scientist (2015)
Google Faculty Research Award in the field of security (2013)
Other contributions
Asokan was part of the team that translated the book Operaatio Elop (Operation Elop) from Finnish into English.
References
External links
N. Asokan at DBLP Computer Science Bibliography
N. Asokan at ACM Digital Library
Fellows of the Association for Computing Machinery
Computer security academics
Nokia people
Living people
University of Waterloo alumni
Fellow Members of the IEEE
Academics of the University of Helsinki
University of Waterloo faculty
Aalto University faculty
Year of birth missing (living people) |
18328362 | https://en.wikipedia.org/wiki/JAlbum | JAlbum | jAlbum is cross-platform photo website software for creating and uploading galleries from images and videos. jAlbum has built-in support for organizing and editing images, but with focus on flexible presentation. The resulting albums can be published on jalbum.net or on the user's own website. jAlbum software has been used to create over 32 million photo galleries, with over one million users. Majestic.com counts over 118 million backlinks to jalbum.net
Software
jAlbum is credited as being extremely easy to use, flexible and versatile.
It relies on the Java virtual machine, so it can be run on most operating systems, and is available in 32 languages.
The software allows users to manage their photo collection, sorting photos into albums, performing basic digital editing and commenting on individual photos. The main focus is on producing HTML-based galleries, for publishing online or distributing via other means. Users can customise the look and functionality of their photo galleries by using a small set of templates or skins that come with the program, or by choosing from dozens of skins available for download. Some are free, but others require a third-party license. The community that has formed around jAlbum produces a variety of creative skins, offering galleries based on standard HTML designs, AJAX slideshows and popular image viewers.
jAlbum was created by Swedish programmer David Ekholm in 2002.
License
Since version 13, jAlbum has used a new license model. Users can purchase:
(1) A Standard license to use on any computer to produce non-commercial albums for display on the user's own website, with one year of free support and updates.
(2) A Pro license to use on any computer to produce commercial or non-commercial albums for display on the user's own website, with one year of free support and updates.
(3) An annual license with unlimited free upgrades for as long as the user has an active paid account for 10 GB (Premium account, non-commercial) or 100 GB (Power account, commercial) of storage space on jalbum.net.
Hosting service
The website jalbum.net is used by professionals and amateurs as a photo sharing website, as well as to promote and distribute the software. Users who register are offered a 30-day trial with 10 GB web space. Alternatively, a 10 GB "Premium" subscription or a 100 GB "Power" subscription are offered for a yearly subscription fee.
See also
Comparison of photo gallery software
Photo sharing
Image hosting service
References
External links
Official website
Image sharing websites |
2181191 | https://en.wikipedia.org/wiki/Link%20Control%20Protocol | Link Control Protocol | In computer networking, the Link Control Protocol (LCP) forms part of the Point-to-Point Protocol (PPP), within the family of Internet protocols. In setting up PPP communications, both the sending and receiving devices send out LCP packets to determine the standards of the ensuing data transmission.
The LCP protocol:
checks the identity of the linked device and either accepts or rejects the device
determines the acceptable packet size for transmission
searches for errors in configuration
can terminate the link if requirements exceed the parameters
Devices cannot use PPP to transmit data over a network until the LCP packet determines the acceptability of the link, but LCP packets are embedded into PPP packets and therefore a basic PPP connection has to be established before LCP can reconfigure it.
LCP packets are carried within PPP packets with control code 0xC021; the PPP information field contains the LCP packet, which has four fields (Code, ID, Length and Data), as sketched after the field list below.
Code: Operation requested: configure link, terminate link, ... and acknowledge and deny codes.
Data: Parameters for the operation.
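As an illustration, the fixed part of this layout can be written as a C structure. This is a sketch based on the field list above rather than code from any particular PPP implementation, and it assumes the usual one-octet Code and ID fields followed by a two-octet Length field.

#include <stdint.h>

/* Sketch of the fixed LCP packet header described above; the
   variable-length Data field follows it in the frame. A real
   implementation must also keep the Length field in network byte
   order and prevent compiler padding. */
struct lcp_packet_header {
    uint8_t  code;       /* requested operation, e.g. configure or terminate link */
    uint8_t  id;         /* matches a reply to its request */
    uint16_t length;     /* total packet length in octets, header included */
};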
External links
: PPP LCP Extensions
: The Point-to-Point Protocol (PPP)
: PPP Reliable Transmission
Link protocols
Internet Standards |
27756092 | https://en.wikipedia.org/wiki/Arts%2C%20Sciences%20and%20Technology%20University%20in%20Lebanon | Arts, Sciences and Technology University in Lebanon | Arts, Sciences and Technology University in Lebanon (AUL), is an independent and nondenominational Lebanese higher education institution with undergraduate and graduate degree programs.
The main campus of the university is in Cola, Beirut. AUL has a branch in Jadra, south of Beirut and five study centers in Sin el Fil, Qalamoun - North Lebanon, Kaslik, Dekwaneh and Chtoura.
History
AUL started under the name "Business and Computer University College" (BCU) with two faculties. It later expanded by adding the Faculty of Arts and Humanities to the existing Business Administration and Sciences and Fine Arts faculties, thereby gaining university status, and changed its name to reflect the expansion of its offerings.
In 2007, BCU changed to become Arts, Sciences and Technology University in Lebanon (AUL); including 3 major faculties:
Faculty of Business Administration.
Faculty of Sciences and Fine Arts.
Faculty of Arts and Humanities.
Accreditation and Recognitions
AUL is accredited by the Lebanese Ministry of Higher Education (2000) (MHE). That accreditation includes undergraduate and graduate levels.
AUL is a member of the European Council for Business Education (ECBE), the Arab Organization for Admission & Registration of Universities in Arab Nations (ACRAO), the International Council for Hotels, Restaurants & Institutional Education, and the Association of Collegiate Business Schools and Programs.
AUL Jadra Campus offers the American University for Humanities degree programs, an autonomous program of liberal education within that university's global network. The program is fully accredited by the American Academy for Liberal Education (AALE), a national accrediting agency recognized by the US Secretary of Education.
Faculties
Faculty of Business Administration
B.A. in Accounting
B.A. in Banking and Finance
B.A. in Management
B.A. in Management Information Systems
B.A. in Marketing and Advertising
B.A. in Hospitality Management
B.A. in Events Management
B.A. in Travel Management
Master of Business Administration (MBA)
Executive Master of Business Administration (EMBA)
Faculty of Sciences & Fine Arts
The Faculty of Sciences and Fine Arts houses 12 academic departments and programs in three separate divisions:
The Division of Computer Science & Engineering with the following departments:
BS. Computer Science.
BS. Information and Communication Technology (previously Computer Communication).
BSE. Computer Communication Engineering.
MS. Master of Science in Computer Science & Communication.
The Division of Fine Arts with the following departments:
BS. Graphic Design
BS. Interior Design
The Division of Basic Sciences with the following departments:
BS. Chemistry
BS. Physics
BS. Biology
BS. Environmental Sciences
BS. Mathematics
BS. Statistical Mathematics
BS. Actuarial Mathematics
Faculty of Arts & Humanities
B.A. in Anthropology
B.A. in Arabic Literature
B.A. in Communication Arts
B.A. in English Literature
B.A. in Performance Arts
B.A. in Religious Studies
B.A. in Sociology
Teaching Diploma
International Relations
Memberships:
Members in American Academy for Liberal Education (AALE)
European Council for business Education (ECBE)
Arab Organization for Admission & registration of universities in Arab Nation (ACRAO)
Intl. Council for Hotels, Restaurant & Institutional Education (I-CHRIE)
Affiliations:
Emporia State University USA
Cezar Ritz SWITZERLAND
Leeds Met. University U.K
Perpignan University FRANCE
Fanshawe College CANADA
American University for Humanities USA
Ecole Nationale d’Ingénieurs de Brest (ENIB), France
Telecom Bretagne, France
Université de Grenoble, France
Wales University, School of Electronic Engineering, Bangor, UK
Wales University, School of Computer Science, Bangor, UK
University of Atlanta, USA, Atlanta
References
Universities in Lebanon
Education in Beirut
Educational institutions established in 1998
1998 establishments in Lebanon |
18949896 | https://en.wikipedia.org/wiki/Computer%20cluster | Computer cluster | A computer cluster is a set of computers that work together so that they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software.
The components of a cluster are usually connected to each other through fast local area networks, with each node (computer used as a server) running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware and the same operating system, although in some setups (e.g. using Open Source Cluster Application Resources (OSCAR)), different operating systems can be used on each computer, or different hardware.
Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.
Computer clusters emerged as a result of convergence of a number of computing trends including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing. They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world such as IBM's Sequoia. Prior to the advent of clusters, single unit fault tolerant mainframes with modular redundancy were employed; but the lower upfront cost of clusters, and increased speed of network fabric has favoured the adoption of clusters. In contrast to high-reliability mainframes clusters are cheaper to scale out, but also have increased complexity in error handling, as in clusters error modes are not opaque to running programs.
Basic concepts
The desire to get more computing power and better reliability by orchestrating a number of low-cost commercial off-the-shelf computers has given rise to a variety of architectures and configurations.
The computer clustering approach usually (but not always) connects a number of readily available computing nodes (e.g. personal computers used as servers) via a fast local area network. The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as by and large one cohesive computing unit, e.g. via a single system image concept.
Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches such as peer to peer or grid computing which also use many nodes, but with a far more distributed nature.
A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fast supercomputer. A basic approach to building a cluster is that of a Beowulf cluster which may be built with a few personal computers to produce a cost-effective alternative to traditional high performance computing. An early project that showed the viability of the concept was the 133-node Stone Soupercomputer. The developers used Linux, the Parallel Virtual Machine toolkit and the Message Passing Interface library to achieve high performance at a relatively low cost.
Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance. The TOP500 organization's semiannual list of the 500 fastest supercomputers often includes many clusters, e.g. the world's fastest machine in 2011 was the K computer which has a distributed memory, cluster architecture.
History
Greg Pfister has stated that clusters were not invented by any specific vendor but by customers who could not fit all their work on one computer, or needed a backup. Pfister estimates the date as some time in the 1960s. The formal engineering basis of cluster computing as a means of doing parallel work of any sort was arguably invented by Gene Amdahl of IBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing: Amdahl's Law.
The history of early computer clusters is more or less directly tied into the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster.
The first production system designed as a cluster was the Burroughs B5700 in the mid-1960s. This allowed up to four computers, each with either one or two processors, to be tightly coupled to a common disk storage subsystem in order to distribute the workload. Unlike standard multiprocessor systems, each computer could be restarted without disrupting overall operation.
The first commercial loosely coupled clustering product was Datapoint Corporation's "Attached Resource Computer" (ARC) system, developed in 1977, and using ARCnet as the cluster interface. Clustering per se did not really take off until Digital Equipment Corporation released their VAXcluster product in 1984 for the VMS operating system. The ARC and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. The idea was to provide the advantages of parallel processing, while maintaining data reliability and uniqueness. Two other noteworthy early commercial clusters were the Tandem Himalayan (a circa 1994 high-availability product) and the IBM S/390 Parallel Sysplex (also circa 1994, primarily for business use).
Within the same time frame, while computer clusters used parallelism outside the computer on a commodity network, supercomputers began to use them within the same computer. Following the success of the CDC 6600 in 1964, the Cray 1 was delivered in 1976, and introduced internal parallelism via vector processing. While early supercomputers excluded clusters and relied on shared memory, in time some of the fastest supercomputers (e.g. the K computer) relied on cluster architectures.
Attributes of clusters
Computer clusters may be configured for different purposes ranging from general purpose business needs such as web-service support, to computation-intensive scientific calculations. In either case, the cluster may use a high-availability approach. Note that the attributes described below are not exclusive and a "computer cluster" may also use a high-availability approach, etc.
"Load-balancing" clusters are configurations in which cluster-nodes share computational workload to provide better overall performance. For example, a web server cluster may assign different queries to different nodes, so the overall response time will be optimized. However, approaches to load-balancing may significantly differ among applications, e.g. a high-performance cluster used for scientific computations would balance load with different algorithms from a web-server cluster which may just use a simple round-robin method by assigning each new request to a different node.
Computer clusters are used for computation-intensive purposes, rather than handling IO-oriented operations such as web service or databases. For instance, a computer cluster might support computational simulations of vehicle crashes or weather. Very tightly coupled computer clusters are designed for work that may approach "supercomputing".
"High-availability clusters" (also known as failover clusters, or HA clusters) improve the availability of the cluster approach. They operate by having redundant nodes, which are then used to provide service when system components fail. HA cluster implementations attempt to use redundancy of cluster components to eliminate single points of failure. There are commercial implementations of High-Availability clusters for many operating systems. The Linux-HA project is one commonly used free software HA package for the Linux operating system.
Benefits
Clusters are primarily designed with performance in mind, but installations are based on many other factors. Fault tolerance (the ability for a system to continue working with a malfunctioning node) allows for scalability, and in high performance situations, low frequency of maintenance routines, resource consolidation (e.g. RAID), and centralized management. Advantages include enabling data recovery in the event of a disaster and providing parallel data processing and high processing capacity.
In terms of scalability, clusters provide this in their ability to add nodes horizontally. This means that more computers may be added to the cluster, to improve its performance, redundancy and fault tolerance. This can be an inexpensive solution for a higher performing cluster compared to scaling up a single node in the cluster. This property of computer clusters can allow for larger computational loads to be executed by a larger number of lower performing computers.
When adding a new node to a cluster, reliability increases because the entire cluster does not need to be taken down. A single node can be taken down for maintenance, while the rest of the cluster takes on the load of that individual node.
A large number of computers clustered together lends itself to the use of distributed file systems and RAID, both of which can increase the reliability and speed of a cluster.
Design and configuration
One of the issues in designing a cluster is how tightly coupled the individual nodes may be. For instance, a single computer job may require frequent communication among nodes: this implies that the cluster shares a dedicated network, is densely located, and probably has homogeneous nodes. The other extreme is where a computer job uses one or few nodes, and needs little or no inter-node communication, approaching grid computing.
In a Beowulf cluster, the application programs never see the computational nodes (also called slave computers) but only interact with the "Master" which is a specific computer handling the scheduling and management of the slaves. In a typical implementation the Master has two network interfaces, one that communicates with the private Beowulf network for the slaves, the other for the general purpose network of the organization. The slave computers typically have their own version of the same operating system, and local memory and disk space. However, the private slave network may also have a large and shared file server that stores global persistent data, accessed by the slaves as needed.
A special purpose 144-node DEGIMA cluster is tuned to running astrophysical N-body simulations using the Multiple-Walk parallel treecode, rather than general purpose scientific computations.
Due to the increasing computing power of each generation of game consoles, a novel use has emerged where they are repurposed into high-performance computing (HPC) clusters. Some examples of game console clusters are Sony PlayStation clusters and Microsoft Xbox clusters. Another example of a consumer game product is the Nvidia Tesla Personal Supercomputer workstation, which uses multiple graphics accelerator processor chips. Besides game consoles, high-end graphics cards can also be used. The use of graphics cards (or rather their GPUs) to do calculations for grid computing is vastly more economical than using CPUs, despite being less precise. However, when using double-precision values, they become as precise to work with as CPUs and are still much less costly in terms of purchase price.
Computer clusters have historically run on separate physical computers with the same operating system. With the advent of virtualization, the cluster nodes may run on separate physical computers with different operating systems that are overlaid with a virtualization layer to appear similar. The cluster may also be virtualized on various configurations as maintenance takes place; an example implementation is Xen as the virtualization manager with Linux-HA.
Data sharing and communication
Data sharing
As the computer clusters were appearing during the 1980s, so were supercomputers. One of the elements that distinguished these classes at the time was that the early supercomputers relied on shared memory. To date clusters do not typically use physically shared memory, while many supercomputer architectures have also abandoned it.
However, the use of a clustered file system is essential in modern computer clusters. Examples include the IBM General Parallel File System, Microsoft's Cluster Shared Volumes or the Oracle Cluster File System.
Message passing and communication
Two widely used approaches for communication between cluster nodes are MPI (Message Passing Interface) and PVM (Parallel Virtual Machine).
PVM was developed at the Oak Ridge National Laboratory around 1989, before MPI was available. PVM must be directly installed on every cluster node and provides a set of software libraries that present the node as part of a "parallel virtual machine". PVM provides a run-time environment for message-passing, task and resource management, and fault notification. PVM can be used by user programs written in C, C++, or Fortran, etc.
MPI emerged in the early 1990s out of discussions among 40 organizations. The initial effort was supported by ARPA and National Science Foundation. Rather than starting anew, the design of MPI drew on various features available in commercial systems of the time. The MPI specifications then gave rise to specific implementations. MPI implementations typically use TCP/IP and socket connections. MPI is now a widely available communications model that enables parallel programs to be written in languages such as C, Fortran, Python, etc. Thus, unlike PVM which provides a concrete implementation, MPI is a specification which has been implemented in systems such as MPICH and Open MPI.
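A minimal MPI program in C, using only the basic calls defined by the MPI specification (so it can be built with any conforming implementation such as MPICH or Open MPI), looks like the following sketch:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* id of this process within the job */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                        /* shut down the MPI runtime */
    return 0;
}

Such a program is typically compiled with an implementation's wrapper compiler (for example mpicc) and started on the cluster's nodes with a launcher such as mpirun or mpiexec.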
Cluster management
One of the challenges in the use of a computer cluster is the cost of administrating it which can at times be as high as the cost of administrating N independent machines, if the cluster has N nodes. In some cases this provides an advantage to shared memory architectures with lower administration costs. This has also made virtual machines popular, due to the ease of administration.
Task scheduling
When a large multi-user cluster needs to access very large amounts of data, task scheduling becomes a challenge. In a heterogeneous CPU-GPU cluster with a complex application environment, the performance of each job depends on the characteristics of the underlying cluster. Therefore, mapping tasks onto CPU cores and GPU devices provides significant challenges. This is an area of ongoing research; algorithms that combine and extend MapReduce and Hadoop have been proposed and studied.
Node failure management
When a node in a cluster fails, strategies such as "fencing" may be employed to keep the rest of the system operational. Fencing is the process of isolating a node or protecting shared resources when a node appears to be malfunctioning. There are two classes of fencing methods; one disables a node itself, and the other disallows access to resources such as shared disks.
The STONITH method stands for "Shoot The Other Node In The Head", meaning that the suspected node is disabled or powered off. For instance, power fencing uses a power controller to turn off an inoperable node.
The resources fencing approach disallows access to resources without powering off the node. This may include persistent reservation fencing via the SCSI3, fibre channel fencing to disable the fibre channel port, or global network block device (GNBD) fencing to disable access to the GNBD server.
Software development and administration
Parallel programming
Load balancing clusters such as web servers use cluster architectures to support a large number of users and typically each user request is routed to a specific node, achieving task parallelism without multi-node cooperation, given that the main goal of the system is providing rapid user access to shared data. However, "computer clusters" which perform complex computations for a small number of users need to take advantage of the parallel processing capabilities of the cluster and partition "the same computation" among several nodes.
Automatic parallelization of programs remains a technical challenge, but parallel programming models can be used to effectuate a higher degree of parallelism via the simultaneous execution of separate portions of a program on different processors.
Debugging and monitoring
Developing and debugging parallel programs on a cluster requires parallel language primitives and suitable tools such as those discussed by the High Performance Debugging Forum (HPDF) which resulted in the HPD specifications. Tools such as TotalView were then developed to debug parallel implementations on computer clusters which use Message Passing Interface (MPI) or Parallel Virtual Machine (PVM) for message passing.
The University of California, Berkeley Network of Workstations (NOW) system gathers cluster data and stores them in a database, while a system such as PARMON, developed in India, allows visually observing and managing large clusters.
Application checkpointing can be used to restore a given state of the system when a node fails during a long multi-node computation. This is essential in large clusters, given that as the number of nodes increases, so does the likelihood of node failure under heavy computational loads. Checkpointing can restore the system to a stable state so that processing can resume without needing to recompute results.
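The idea can be illustrated with a small application-level sketch (illustrative only, not tied to any particular checkpointing library): the program periodically writes its progress to a file, and on restart it resumes from the last saved state instead of recomputing from the beginning.

#include <stdio.h>

#define CHECKPOINT_EVERY 1000000L
#define CHECKPOINT_FILE  "checkpoint.dat"   /* hypothetical file name */

int main(void) {
    long i = 0;
    double sum = 0.0;

    /* If a checkpoint exists, resume from it. */
    FILE *f = fopen(CHECKPOINT_FILE, "r");
    if (f) {
        if (fscanf(f, "%ld %lf", &i, &sum) != 2) { i = 0; sum = 0.0; }
        fclose(f);
    }
    for (; i < 100000000L; i++) {
        sum += 1.0 / (double)(i + 1);
        if ((i + 1) % CHECKPOINT_EVERY == 0) {
            f = fopen(CHECKPOINT_FILE, "w");
            if (f) {
                /* Save the next iteration to run and the running sum. */
                fprintf(f, "%ld %.17g\n", i + 1, sum);
                fclose(f);
            }
        }
    }
    printf("sum = %f\n", sum);
    return 0;
}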
Implementations
The Linux world supports various cluster software; for application clustering, there are distcc and MPICH. Linux Virtual Server and Linux-HA are director-based clusters that allow incoming requests for services to be distributed across multiple cluster nodes. MOSIX, LinuxPMI, Kerrighed and OpenSSI are full-blown clusters integrated into the kernel that provide for automatic process migration among homogeneous nodes. OpenSSI, openMosix and Kerrighed are single-system image implementations.
Microsoft's Windows Compute Cluster Server 2003, based on the Windows Server platform, provides pieces for high-performance computing, such as the Job Scheduler, the MSMPI library and management tools.
gLite is a set of middleware technologies created by the Enabling Grids for E-sciencE (EGEE) project.
Slurm is also used to schedule and manage some of the largest supercomputer clusters (see the TOP500 list).
Other approaches
Although most computer clusters are permanent fixtures, attempts at flash mob computing have been made to build short-lived clusters for specific computations. However, larger-scale volunteer computing systems such as BOINC-based systems have had more followers.
See also
References
Further reading
External links
IEEE Technical Committee on Scalable Computing (TCSC)
Reliable Scalable Cluster Technology, IBM
Tivoli System Automation Wiki
Large-scale cluster management at Google with Borg, April 2015, by Abhishek Verma, Luis Pedrosa, Madhukar Korupolu, David Oppenheimer, Eric Tune and John Wilkes
Parallel computing
Concurrent computing
Computer cluster
Local area networks
Classes of computers
Fault-tolerant computer systems
Server hardware |
376795 | https://en.wikipedia.org/wiki/Flex%20%28lexical%20analyser%20generator%29 | Flex (lexical analyser generator) | Flex (fast lexical analyzer generator) is a free and open-source software alternative to lex.
It is a computer program that generates lexical analyzers (also known as "scanners" or "lexers").
It is frequently used as the lex implementation together with Berkeley Yacc parser generator on BSD-derived operating systems (as both lex and yacc are part of POSIX), or together with GNU bison (a version of yacc) in *BSD ports and in Linux distributions. Unlike Bison, flex is not part of the GNU Project and is not released under the GNU General Public License, although a manual for Flex was produced and published by the Free Software Foundation.
History
Flex was written in C around 1987 by Vern Paxson, with many ideas and much inspiration from Van Jacobson; the original version was by Jef Poskanzer. The fast table representation is a partial implementation of a design by Van Jacobson, implemented by Kevin Gong and Vern Paxson.
Example lexical analyzer
This is an example of a Flex scanner for the instructional programming language PL/0.
The tokens recognized are: '+', '-', '*', '/', '=', '(', ')', ',', ';', '.', ':=', '<', '<=', '<>', '>', '>=';
numbers: 0-9 {0-9}; identifiers: a-zA-Z {a-zA-Z0-9} and keywords: begin, call, const, do, end, if, odd, procedure, then, var, while.
%{
#include "y.tab.h"
%}
digit [0-9]
letter [a-zA-Z]
%%
"+" { return PLUS; }
"-" { return MINUS; }
"*" { return TIMES; }
"/" { return SLASH; }
"(" { return LPAREN; }
")" { return RPAREN; }
";" { return SEMICOLON; }
"," { return COMMA; }
"." { return PERIOD; }
":=" { return BECOMES; }
"=" { return EQL; }
"<>" { return NEQ; }
"<" { return LSS; }
">" { return GTR; }
"<=" { return LEQ; }
">=" { return GEQ; }
"begin" { return BEGINSYM; }
"call" { return CALLSYM; }
"const" { return CONSTSYM; }
"do" { return DOSYM; }
"end" { return ENDSYM; }
"if" { return IFSYM; }
"odd" { return ODDSYM; }
"procedure" { return PROCSYM; }
"then" { return THENSYM; }
"var" { return VARSYM; }
"while" { return WHILESYM; }
{letter}({letter}|{digit})* {
yylval.id = strdup(yytext);
return IDENT; }
{digit}+ { yylval.num = atoi(yytext);
return NUMBER; }
[ \t\n\r] /* skip whitespace */
. { printf("Unknown character [%c]\n",yytext[0]);
return UNKNOWN; }
%%
int yywrap(void){return 1;}
Internals
These programs perform character parsing and tokenizing via the use of a deterministic finite automaton (DFA). A DFA is a theoretical machine accepting regular languages. These machines are a subset of the collection of Turing machines. DFAs are equivalent to read-only right moving Turing machines. The syntax is based on the use of regular expressions. See also nondeterministic finite automaton.
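The table-driven approach can be illustrated with a small hand-written C sketch (this is an illustration, not Flex's generated code): a DFA that accepts strings of one or more decimal digits, performing one transition-table lookup per input character.

#include <ctype.h>
#include <stdio.h>

/* Hand-written table-driven DFA that accepts one or more decimal digits.
   Each input character costs a single table lookup. */
enum state { START, IN_NUM, DEAD };

static const enum state transition[3][2] = {
    /*             digit    other */
    /* START  */ { IN_NUM,  DEAD },
    /* IN_NUM */ { IN_NUM,  DEAD },
    /* DEAD   */ { DEAD,    DEAD },
};

static int accepts(const char *s) {
    enum state st = START;
    for (; *s; s++)
        st = transition[st][isdigit((unsigned char)*s) ? 0 : 1];
    return st == IN_NUM;   /* IN_NUM is the only accepting state */
}

int main(void) {
    printf("%d %d\n", accepts("12345"), accepts("12a45"));  /* prints: 1 0 */
    return 0;
}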
Issues
Time complexity
A Flex lexical analyzer usually has time complexity linear in the length of the input. That is, it performs a constant number of operations for each input symbol. This constant is quite low: GCC generates 12 instructions for the DFA match loop. Note that the constant is independent of the length of the token, the length of the regular expression and the size of the DFA.
However, using the REJECT macro in a scanner with the potential to match extremely long tokens can cause Flex to generate a scanner with non-linear performance. This feature is optional. In this case, the programmer has explicitly told Flex to "go back and try again" after it has already matched some input. This will cause the DFA to backtrack to find other accept states. The REJECT feature is not enabled by default, and because of its performance implications its use is discouraged in the Flex manual.
Reentrancy
By default the scanner generated by Flex is not reentrant. This can cause serious problems for programs that use the generated scanner from different threads. To overcome this issue there are options that Flex provides in order to achieve reentrancy. A detailed description of these options can be found in the Flex manual.
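As a sketch of the approach described in the manual (the option and function names below are those of flex's reentrant mode and may vary between versions), a reentrant scanner is requested with %option reentrant, after which each caller allocates and frees its own scanner state instead of sharing globals:

%{
#include <stdio.h>
%}
%option reentrant noyywrap
%%
[0-9]+    { /* handle a number */ }
.|\n      { /* ignore everything else */ }
%%
/* Each thread creates, uses and destroys its own yyscan_t handle,
   so no scanner state is shared between threads. */
int scan_stream(FILE *input) {
    yyscan_t scanner;
    yylex_init(&scanner);
    yyset_in(input, scanner);
    yylex(scanner);
    yylex_destroy(scanner);
    return 0;
}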
Usage under non-Unix environments
Normally the generated scanner contains references to the unistd.h header file, which is Unix specific. To avoid generating code that includes unistd.h, %option nounistd should be used. Another issue is the call to isatty (a Unix library function), which can be found in the generated code. The %option never-interactive forces flex to generate code that does not use isatty.
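For example, placing the two directives named above in the definitions section of the scanner specification avoids both the unistd.h include and the isatty call:

%option nounistd
%option never-interactive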
Using flex from other languages
Flex can only generate code for C and C++. To use the scanner code generated by flex from other languages a language binding tool such as SWIG can be used.
Flex++
flex++ is a similar lexical scanner for C++ which is included as part of the flex package. The generated code does not depend on any runtime or external library except for a memory allocator (malloc or a user-supplied alternative) unless the input also depends on it. This can be useful in embedded and similar situations where traditional operating system or C runtime facilities may not be available.
The flex++ generated C++ scanner includes the header file FlexLexer.h, which defines the interfaces of the two C++ generated classes.
See also
Comparison of parser generators
Lex
yacc
GNU Bison
Berkeley Yacc
References
Further reading
M. E. Lesk and E. Schmidt, LEX - Lexical Analyzer Generator
Alfred Aho, Ravi Sethi and Jeffrey Ullman, Compilers: Principles, Techniques and Tools, Addison-Wesley (1986). Describes the pattern-matching techniques used by flex (deterministic finite automata)
External links
ANSI-C Lex Specification
JFlex: Fast Scanner Generator for Java
Brief description of Lex, Flex, YACC, and Bison
Free compilers and interpreters
Compiling tools
Free software programmed in C
Finite automata
Software using the BSD license
Lexical analysis |
426205 | https://en.wikipedia.org/wiki/Storm%20chasing | Storm chasing | Storm chasing is broadly defined as the deliberate pursuit of any severe weather phenomenon, regardless of motive, but most commonly for curiosity, adventure, scientific investigation, or for news or media coverage. A person who chases storms is known as a storm chaser or simply a chaser.
While witnessing a tornado is the single biggest objective for most chasers, many chase thunderstorms and delight in viewing cumulonimbus and related cloud structures, watching a barrage of hail and lightning, and seeing what skyscapes unfold. A smaller number of storm chasers attempt to intercept tropical cyclones and waterspouts.
Nature of and motivations for chasing
Storm chasing is chiefly a recreational endeavor, with chasers usually giving their motives as photographing or video recording a storm, or for various personal reasons. These can include the beauty of the views afforded by the sky and land, the mystery of not knowing precisely what will unfold, the journey to an undetermined destination on the open road, intangible experiences such as feeling one with a much larger and more powerful natural world, the challenge of correctly forecasting and intercepting storms with optimal vantage points, and pure thrill seeking. Pecuniary interests and competition may also be components; in contrast, camaraderie is common.
Although scientific work is sometimes cited as a goal, direct participation in such work is almost always impractical during the actual chase except for chasers collaborating in an organized university or government project. Many chasers also act as storm spotters, reporting their observations of hazardous weather to relevant authorities. These reports greatly benefit real-time warnings with ground truth information, as well as science as a whole by increasing the reliability of severe storm databases used in climatology and other research (which ultimately boosts forecast and warning skill). Additionally, many recreational chasers submit photos and videos to researchers as well as to the U.S. National Weather Service (NWS) for spotter training.
Storm chasers are not generally paid to chase, with the exception of television media crews in certain television market areas, video stringers and photographers (freelancers mostly, but some staff), and researchers such as graduate meteorologists and professors. An increasing number sell storm videos and pictures and manage to make a profit. A few operate "chase tour" services, making storm chasing a recently developed form of niche tourism. Financial returns usually are relatively meager given the expenses of chasing, with most chasers spending more than they take in and very few making a living solely from chasing. Chasers are also generally limited by the duration of the season in which severe storms are most likely to develop, usually the local spring and/or summer.
No degree or certification is required to be a storm chaser, and many chases are mounted independently by amateurs and enthusiasts without formal training. Local National Weather Service offices do hold storm spotter training classes, usually early in the spring. Some offices collaborate to produce severe weather workshops oriented toward operational meteorologists.
Storm chasers come from a wide variety of occupational and socioeconomic backgrounds. Though a fair number are professional meteorologists, most storm chasers are from other occupational fields, which may include any number of professions that have little or nothing to do with meteorology. A relatively high proportion possess college degrees and a large number live in the central and southern United States. Many are lovers of nature with interests that also include flora, fauna, geology, volcanoes, aurora, meteors, eclipses, and astronomy.
History
The first person to gain public recognition as a storm chaser was David Hoadley (born 1938), who began chasing North Dakota storms in 1956, systematically using data from area weather offices and airports. He is widely considered the pioneer storm chaser and was the founder and first editor of Storm Track magazine.
Neil B. Ward (1914–1972) subsequently brought research chasing to the forefront in the 1950s and 1960s, enlisting the help of the Oklahoma Highway Patrol to study storms. His work pioneered modern storm spotting and made institutional chasing a reality.
The first coordinated storm chasing activity sponsored by institutions was undertaken as part of the Alberta Hail Studies project beginning in 1969. Vehicles were outfitted with various meteorological instrumentation and hail-catching apparatus and were directed into suspected hail regions of thunderstorms by a controller at a radar site. The controller communicated with the vehicles by radio.
In 1972, the University of Oklahoma (OU) in cooperation with the National Severe Storms Laboratory (NSSL) began the Tornado Intercept Project, with the first outing taking place on 19 April of that year. This was the first large-scale tornado chasing activity sponsored by an institution. It culminated in a brilliant success in 1973 with the Union City, Oklahoma tornado providing a foundation for tornado and supercell morphology that proved the efficacy of storm chasing field research. The project produced the first legion of veteran storm chasers, with Hoadley's Storm Track magazine bringing the community together in 1977.
Storm chasing then reached popular culture in three major spurts: in 1978 with the broadcast of an episode of the television program In Search of...; in 1985 with a documentary on the PBS series Nova; and in May 1996 with the theatrical release of Twister, a Hollywood blockbuster which provided an action-packed but heavily fictionalized glimpse of the hobby. Further early exposure to storm chasing resulted from notable magazine articles, beginning in the late 1970s in Weatherwise magazine.
Various television programs and increased coverage of severe weather by the news media, especially since the initial video revolution in which VHS ownership became widespread by the early 1990s, substantially elevated awareness of and interest in storms and storm chasing. The Internet in particular has contributed to a significant increase in the number of storm chasers since the mid-to-late 1990s. A sharp increase in the general public impulsively wandering about their local area in search of tornadoes similarly is largely attributable to these factors. The 2007–2011 Discovery Channel reality series Storm Chasers produced another surge in activity. Over the years the nature of chasing and the characteristics of chasers shifted.
From their advent in the 1970s until the mid-1990s, scientific field projects were occasionally conducted in the Great Plains during the spring. The first of the seminal VORTEX projects occurred in 1994–1995 and was soon followed by various field experiments each spring, with another large project, VORTEX2, in 2009–2010. Since the mid-1990s, most storm chasing science, with the notable exception of large field projects, consists of mobile Doppler weather radar intercepts.
Typical storm chase
Chasing often involves driving thousands of miles in order to witness the relatively short window of time of active severe thunderstorms. It is not uncommon for a chaser to end up empty handed on any particular day. Storm chasers' degrees of involvement, competencies, philosophies, and techniques vary widely, but many chasers spend a significant amount of time forecasting, both before going on the road as well as during the chase, utilizing various sources for weather data. Most storm chasers are not meteorologists, and many chasers expend significant time and effort in learning meteorology and the intricacies of severe convective storm prediction through both study and experience.
Besides the copious driving to, from, and during chases, storm chasing is punctuated with contrasting periods of long waiting and ceaseless action. Downtime can consist of sitting under sun-baked skies for hours, playing pickup sports, evaluating data, or visiting landmarks while awaiting convective initiation. During an inactive pattern, this down time can persist for days. When storms are occurring, there is often little or no time to eat or relieve oneself and finding fuel can cause frustrating delays and detours. Navigating obstacles such as rivers and areas with inadequate road networks is a paramount concern. Only a handful of chasers decide to chase in Dixie Alley, an area of the Southern United States in which trees and road networks heavily obscure the storms and often large tornadoes. The combination of driving and waiting has been likened to "extreme sitting". A "bust" occurs when storms do not fire, sometimes referred to as "severe clear", when storms fire but are missed, when storms fire but are meager, or when storms fire after dusk.
Most chasing is accomplished by driving a motor vehicle of any make or model, whether it be a sedan, van, pickup truck, or SUV, however, a few individuals occasionally fly planes and television stations in some markets use helicopters. Research projects sometimes employ aircraft, as well.
Geographical, seasonal, and diurnal activity
Storm chasers are most active in the spring and early summer, particularly May and June, across the Great Plains of the United States (extending into Canada) in an area colloquially known as Tornado Alley, with many hundred individuals active on some days during this period. This coincides with the most consistent tornado days in the most desirable topography of the Great Plains. Not only are the most intense supercells common here, but due to the moisture profile of the atmosphere the storms tend to be more visible than locations farther east where there are also frequent severe thunderstorms. There is a tendency for chases earlier in the year to be farther south, shifting farther north with the jet stream as the season progresses. Storms occurring later in the year tend to be more isolated and slower moving, both of which are also desirable to chasers.
Chasers may operate whenever significant thunderstorm activity is occurring, whatever the date. This most commonly includes more sporadic activity occurring in warmer months of the year bounding the spring maximum, such as the active month of April and to a lesser extent March. The focus in the summer months is the Central or Northern Plains states and the Prairie Provinces, the Upper Midwest, or on to just east of the Colorado Front Range. An annually inconsistent and substantially smaller peak of severe thunderstorm and tornado activity also arises in the transitional months of autumn, particularly October and November. This follows a pattern somewhat the reverse of the spring pattern with the focus beginning in the north then dropping south and with an overall eastward shift. In the area with the most consistent significant tornado activity, the Southern Plains, the tornado season is intense but is relatively brief whereas central to northern and eastern areas experience less intense and consistent activity that is diffused over a longer span of the year.
Advancing technology since the mid-2000s led to chasers more commonly targeting less amenable areas (i.e. hilly or forested) that were previously eschewed when continuous wide visibility was critical. These advancements, particularly in-vehicle weather data such as radar, also led to an increase in chasing after nightfall. Most chasing remains during daylight hours with active storm intercepting peaking from mid-late afternoon through early-to-mid evening. This is dictated by a chaser's schedule (availability to chase) and by when storms form, which usually is around peak heating during the mid-to-late afternoon but on some days occurs in early afternoon or even in the morning. An additional advantage of later season storms is that days are considerably longer than in early spring. Morning or early afternoon storms tend to be associated with stronger wind shear and thus most often happen earlier in the spring season or later during the fall season.
Some organized chasing efforts have also begun in the Top End of the Northern Territory and in southeastern Australia, with the biggest successes in November and December. A handful of individuals are also known to be chasing in other countries, including the United Kingdom, Israel, Italy, Spain, France, Belgium, the Netherlands, Finland, Germany, Austria, Switzerland, Poland, Bulgaria, Slovenia, Hungary, the Czech Republic, Slovakia, Estonia, Argentina, South Africa, Bangladesh, and New Zealand; although many people trek to the Great Plains of North America from these and other countries around the world (especially from the UK). The number of chasers and the number of countries where chasers are active expanded at an accelerating pace in Europe from the 1990s to the 2010s.
Dangers
There are inherent dangers involved in pursuing hazardous weather. These range from lightning, tornadoes, large hail, flooding, hazardous road conditions (rain or hail-covered roadways), animals on the roadway, downed power lines (and occasionally other debris), reduced visibility from heavy rain (often wind blown), blowing dust, and hail fog. Most directly weather-related hazards such as from a tornado are minimized if the storm chaser is knowledgeable and cautious. In some situations severe downburst winds may push automobiles around, especially high-profile vehicles. Tornadoes affect a relatively small area and are predictable enough to be avoided if sustaining situational awareness and following strategies including always having an open escape route, maintaining a safe distance, and avoiding placement in the direction of travel of a tornado (in most cases in the Northern Hemisphere this is to the north and to the east of a tornado). Lightning, however, is an unavoidable hazard. "Core punching", storm chaser slang for driving through a heavy precipitation core to intercept the area of interest within a storm, is recognized as hazardous due to reduced visibility and because many tornadoes are rain-wrapped. The "bear's cage" refers to the area under a rotating wall cloud (and any attendant tornadoes), which is the "bear", and to the blinding precipitation (which can include window-shatteringly large hail) surrounding some or all sides of a tornado, which is the "cage". Similarly, chasing at night heightens risk due to darkness.
In reality, the most significant hazard is driving, which is made more dangerous by the severe weather. Adding still more to this hazard are the multiple distractions which can compete for a chaser's attention, such as driving, communicating with chase partners and others with a phone and/or radio, navigating, watching the sky, checking weather data, and shooting photos or video. Again here, prudence is key to minimizing the risk. Chasers ideally work to prevent the driver from multitasking either by chase partners covering the other aspects or by the driver pulling over to do these other things if he or she is chasing alone. Falling asleep while driving is a chase hazard, especially on long trips back. This also is exacerbated by nocturnal darkness and by the fatiguing demands of driving through precipitation and on slick roads.
Incidents
For nearly 60 years, the only known chaser deaths were driving-related. The first was Christopher Phillips, a University of Oklahoma undergraduate student, killed in a hydroplaning accident when swerving to miss a rabbit in 1984. Three other incidents occurred when Jeff Wear was driving home from a hurricane chase in 2005, when Fabian Guerra swerved to miss a deer while driving to a chase in 2009, and when a wrong-way driver caused a head-on collision that killed Andy Gabrielson, who was returning from a chase, in 2012. On May 31, 2013, an extreme event led to the first known chaser deaths inflicted directly by weather when the widest tornado ever recorded struck near El Reno, Oklahoma. Engineer Tim Samaras, his photographer son Paul, and meteorologist Carl Young were killed while doing in situ probe and infrasonic field research, by an exceptional combination of events in which an already large and rain-obscured tornado swelled in less than a minute to a record width of 2.6 miles (4.2 km) while simultaneously changing direction and accelerating. Several other chasers were also struck, and some injured, by this tornado and its parent supercell's rear flank downdraft. Three chasers were killed, two in one vehicle and one in another, when their vehicles collided in West Texas in 2017. The most recent chasing-related fatality was on the morning of June 20, 2019, when Dale Sharpe, an Australian, struck a deer and his vehicle subsequently became disabled. As he stepped out of the vehicle, an oncoming vehicle struck him, and he later died at the hospital. There have been other incidents in which chasers were injured in automobile accidents, lightning strikes, and tornado impacts. While chasing a tornado outbreak on 13 March 1990, KWTV television photographer Bill Merickel was shot and injured near Lindsay, Oklahoma.
Equipment
Storm chasers vary with regard to the amount of equipment used. Some prefer a minimalist approach, for example taking only basic photographic equipment on a chase, while others use everything from satellite-based tracking systems and live data feeds to vehicle-mounted weather stations and hail guards.
Historic
Historically, storm chasing relied on either in-field analysis or in some cases nowcasts from trained observers and forecasters. The first in-field technology consisted of radio gear for communication. Much of this equipment could also be adapted to receive radiofax data which was useful for receiving basic observational and analysis data. The primary users of such technology were university or government research groups who often had larger budgets than individual chasers.
Radio scanners were also heavily used to listen in on emergency services and storm spotters so as to determine where the most active or dangerous weather was located. A number of chasers were also radio amateurs, and used mobile (or portable) amateur radio to communicate directly with spotters and other chasers, allowing them to keep abreast of what they could not themselves see.
It was not until the mid- to late 1980s that the evolution of the laptop computer would begin to revolutionize storm chasing. Early on, some chasers carried acoustic couplers to download batches of raw surface and upper air data from payphones. The technology was too slow for graphical imagery such as radar and satellite data, which in the first years was not available over telephone lines anyway. Some raw data could be downloaded and plotted by software, such as surface weather observations using WeatherGraphix (predecessor to Digital Atmosphere) and similar programs, or upper-air soundings using SHARP, RAOB, and similar software.
Most meteorological data was acquired all at once early in the morning, and the rest of the day's chasing was based on the analysis and forecast gleaned from this, as well as on visual clues that presented themselves in the field throughout the day. Plotted weather maps were often analyzed by hand for manual diagnosis of meteorological patterns. Occasionally chasers would make stops at rural airstrips or NWS offices for an update on weather conditions. NOAA Weather Radio (NWR) could provide information in the vehicle, without stopping, such as weather watches and warnings, surface weather conditions, convective outlooks, and NWS radar summaries. Nowadays, storm chasers may use high-speed Internet access available in any library, even in small towns in the US. This data is available throughout the day, but one must find and stop at a location offering Internet access.
With the development of mobile computers, the first computer mapping software became feasible, at about the same time that the popular adoption of the VHS camcorder began a rapid growth phase. Prior to the mid to late 1980s most motion picture equipment consisted of 8 mm film cameras. While the quality of the first VHS consumer cameras was quite poor (and the size somewhat cumbersome) when compared to traditional film formats, the amount of video which could be shot with a minimal amount of resources was much greater than with any film format at the time.
In the 1980s and 1990s The Weather Channel (TWC) and A.M. Weather were popular with chasers, in the morning preceding a chase for the latter and both before and during a chase for the former. Commercial radio sometimes also provides weather and damage information. The 1990s brought technological leaps and bounds. With the swift development of solid state technology, television sets, for example, could be installed with ease in most vehicles, allowing storm chasers to actively view local TV stations. Mobile phones became popular, making group coordination easier when traditional radio communications methods were not ideal or for those not possessing radios. The development of the World Wide Web (WWW) in 1993 hastened adoption of the Internet and led to FTP access to some of the first university weather sites.
The mid-1990s marked the development of smaller, more efficient marine radars. While such marine radars are illegal if used in land-mobile situations, a number of chasers were quick to adopt them in an effort to have mobile radar. These radars have been found to interfere with research radars, such as the Doppler on Wheels (DOW) utilized in field projects. The first personal lightning detection and mapping devices also became available, and the first online radar data was offered by private corporations or, initially with delays, by free services. A popular data vendor by the end of the 1990s was WeatherTAP.
Current
Chasers used paper maps for navigation and some of those now using GPS still use these as a backup or for strategizing with other chasers. Foldable state maps can be used but are cumbersome due to the multitude of states needed and only show major roads. National atlases allow more detail and all states are contained in a single book, with AAA favored and Rand McNally followed by Michelin also used. The preferred atlases due to great detail in rural areas are the "Roads of..." series originally by Shearer Publishing, which first included Texas but expanded to other states such as Oklahoma and Colorado. Covering every state of the union are the DeLorme "Atlas and Gazetteer" series. DeLorme also produced early GPS receivers that connected to laptops and for years was one of two major mapping software creators. DeLorme Street Atlas USA or Microsoft Streets & Trips were used by most chasers until their discontinuations in 2013. Chasers now use Google Maps or other web mapping as no suitable alternative mapping software emerged. GPS receivers may still be used with other software, such as for displaying radar data.
A major turning point was the advent of civilian GPS in 1996. At first, GPS units were very costly and only offered basic functions, but that would soon change. Towards the late 1990s the Internet was awash in weather data and free weather software, and the first true cellular Internet modems for consumer use emerged, providing chasers access to data in the field without having to rely on a nowcaster. The NWS also released the first free, up-to-date NEXRAD Level 3 radar data. In conjunction with all of this, GPS units gained the ability to connect with computers, granting greater ease when navigating.
2001 marked the next great technological leap for storm chasers as the first Wi-Fi units began to emerge, offering wireless broadband service in many cases for free. Some places (restaurants, motels, libraries, etc.) were known to reliably offer wireless access, and wardriving located other availabilities. In 2002 the first Windows-based package to combine GPS positioning and Doppler radar appeared, called SWIFT WX. SWIFT WX allowed storm chasers to position themselves accurately relative to tornadic storms.
In 2004 two more storm chaser tools emerged. The first, WxWorx, was a new XM Satellite Radio based system utilizing a special receiver and Baron Services weather software. Unlike preexisting cellular based services there was no risk of dead spots, which meant that even in the most remote areas storm chasers still had a live data feed. The second tool was a new piece of software called GRLevel3. GRLevel3 utilized both free and subscription based raw radar files, displaying the data in a true vector format with GIS layering abilities. Since 2006 a growing number of chasers have been using Spotter Network (SN), which uses GPS data to plot the real-time positions of participating spotters and chasers, allows observers to report significant weather, and provides GIS layering for navigation maps, weather products, and the like.
The most common chaser communications device is the cellular phone. These are used for both voice and data connections. External antennas and amplifiers may be used to boost signal reception and transmission. It is not uncommon for chasers to travel in small groups of cars, and they may use CB radio (declining in use) or inexpensive GMRS / FRS hand-held transceivers for inter-vehicle communication. More commonly, many chasers are also ham radio operators and use the 2 meter VHF and, less often, 70 cm UHF bands to communicate between vehicles or with Skywarn / Canwarn spotter networks. Scanners are often used to monitor spotter and sometimes public safety communications, and can double as weather radios. Since the mid-2000s social networking services may also be used, with Twitter most used for ongoing events, Facebook for sharing images and discussing chase reports, and Instagram trailing in adoption. Social networking services have largely (but not completely) replaced forums and email lists, which complemented and eventually supplanted Stormtrack magazine, for conversing about storms.
In-field environmental data is still popular among some storm chasers, especially temperature, moisture, and wind speed and direction data. Many have chosen to mount weather stations atop their vehicles. Others use handheld anemometers. Rulers or baseballs may be brought along for measuring hail or for use as a size-comparison object. Vehicle-mounted cameras, such as on the roof or more commonly on the dash, provide continuous visual recording capability.
Chasers have made heavy use of still photography since the beginning. Videography gained prominence from the 1990s into the early 2000s, but a resurgence of photography occurred with the advent of affordable and versatile digital SLR (DSLR) cameras. Prior to this, 35 mm SLR print and slide film formats were mostly used, along with some medium format cameras. In the late 2000s, mobile phone 3G data networks became fast enough to allow live streaming video from chasers using webcams. This live imagery is frequently used by the media, as well as NWS meteorologists, emergency managers, and the general public for direct ground truth information, and it promotes video sales opportunities for chasers. Also by this time, camcorders using memory cards to record video began to be adopted. Digital video had been around for years but was recorded on tape, whereas solid-state storage is random access rather than sequential access (linear) and has no moving parts. Late in the 2000s HD video began to overtake SD (which had been NTSC in North America) in usage as prices came down and performance increased (initially there were low-light and sporadic aliasing problems due to chip and sensor limitations). By the mid-2010s 4K cameras were increasingly in use. Tripods are used by those seeking crisp professional photo and video imagery and also enable chasers to tend to other activities. Other accessories include cable/remote shutter releases, lightning triggers, and lens filters. Windshield-mounted cameras or dome-enclosed cameras atop vehicle roofs may also be used, and a few chasers use UAVs ("drones").
Late in the 2000s smartphone usage increased, with radar viewing applications frequently used. In particular, RadarScope on the iOS and Android platforms is favored. Pkyl3, whose development was discontinued in August 2018, was a dominant early choice on Android devices. Other apps may be used, as are browsers for viewing meteorological data and accessing social networking services. Some handsets can be used as WiFi hotspots, and wireless cards may also be used to avoid committing a handset to tethering or operating as a hotspot. Some hotspots operate as mobile broadband MVNO devices using any radio spectrum that is both available and under contract with a service provider. Such devices may expand mobile data range beyond a single carrier's service area and typically can work on month-to-month contracts. Adoption of tablet computers expanded by the early 2010s. 4G LTE has been adopted when available and can be especially useful for uploading HD video. A gradual uptick of those selecting mirrorless interchangeable-lens cameras (MILCs) began in the mid-2010s. Usage of DSLRs for video capture, called HDSLR, is common, although HD camcorders remain popular due to their greater functionality (many chasers still shoot both).
Chasers also carry common travel articles and vehicle maintenance items, and sometimes first aid kits. Full-sized spare tires are strongly preferable to "donut" emergency replacement tires. Power inverters (often with surge-protected power strips) power devices that require AC (indoor/wall outlet) power, although some devices may be powered directly with DC (battery power) from the vehicle electrical system. Water-repellent products, such as Rain-X or Aquapel, are frequently applied to windshields to dispel water when driving, as well as mud and small detritus, which boosts visibility and image clarity on photographs and videos shot through glass (which is particularly problematic if autofocus is on). Binoculars and sunglasses are commonly employed.
Ethics
A growing number of experienced storm chasers advocate the adoption of a code of ethics in storm chasing featuring safety, courtesy, and objectivity as the backbone. Storm chasing is a highly visible recreational activity (which is also associated with science) that is vulnerable to sensationalist media promotion. Veteran storm chasers Chuck Doswell and Roger Edwards deemed reckless storm chasers "yahoos". Doswell and Edwards believe poor chasing ethics at TV news stations add to the growth of "yahoo" storm chasing. A large lawsuit was filed against the parent company of The Weather Channel in March 2019 for allegedly keeping on contract storm chaser drivers with a demonstrated pattern of reckless driving, which ultimately led to a fatal collision (killing the chasers and a storm spotter in the other vehicle) when they ran a stop sign in Texas in 2017. Edwards and Rich Thompson, among others, also expressed concern about pernicious effects of media profiteering, with Matt Crowther, among others, agreeing in principle but viewing sales as not inherently corrupting. Self-policing is seen as the means to mold the hobby. There is occasional discussion among chasers that at some point government regulation may be imposed due to increasing numbers of chasers and because of poor behavior by some individuals; however, many chasers do not expect this eventuality and almost all oppose regulations, as do some formal studies of dangerous leisure activities which advocate deliberative self-policing.
Alongside concern about storm chaser conduct, there is also attention to chaser responsibility. Since some chasers are trained in first aid and even first responder procedures, it is not uncommon for tornado chasers to be first on a scene, tending to storm victims or treating injuries at the site of a disaster in advance of emergency personnel and other outside aid.
Aside from questions concerning their ethical values and conduct, many chasers have been credited with giving back to the community in several ways. Just before the Joplin tornado, storm chaser Jeff Piotrowski provided advance warning to Officer Brewer of Joplin local law enforcement, prompting them to activate the emergency sirens. Though lives were lost, many who survived credited their survival to the sirens. After a storm has passed, storm chasers are often the first to arrive on the scene to help assist in the aftermath. An unexpected yet increasingly common contribution of storm chasers is the data they provide to storm research through their videos, social media posts, and documentation of the storms they encounter. After the El Reno tornado in 2013, portals were created for chasers to submit their information to help in the research of the deadly storm.
In popular culture
Twister, a 1996 film starring Helen Hunt and Bill Paxton
Into the Storm, a 2014 film
Heavy Weather, a 1994 novel by Bruce Sterling
See also
53rd Weather Reconnaissance Squadron
Eclipse chasing
Landscape photography
NOAA Hurricane Hunters
Weather spotting
References
Further reading
External links
The Meaning of Chasing (T.J. Turnage)
National Storm Chaser Convention
Storm Chasing History Anthology
Storm Spotters Guides: Chasing
Observation hobbies
Research methods |
23470801 | https://en.wikipedia.org/wiki/Norton%20Family | Norton Family | Norton Family (previously known as Online Family.Norton and Norton Online Family) is an American cloud-based parental control service. Norton Family is aimed at "fostering communication" involving parents and their children's online activities. Computer activities are monitored by the software client, and reports are published online.
Development
Symantec debuted a beta version of Online Family on February 17, 2009. Its debut coincided with Symantec's announcement of the Norton Online Family Advisory Council, a committee of experts in various child care fields who would test and provide insight on the beta. Citing a Rochester Institute of Technology study, the company intended to bridge the gap between the percentage of parents and the percentage of children who report no online supervision.
The software was renamed OnlineFamily.Norton and released on April 27, 2009, during what Symantec dubbed Internet Safety Week. The service, valued at $60, was to remain free indefinitely; whether to institute a fee for the product had not been decided. It was initially planned to become part of the upcoming Norton Internet Security, but it was not immediately incorporated into Norton's security suite.
On 26 January 2018, Norton Family emailed all its users alerting them to an update of its pricing and its incorporation into Norton Security Premium. Norton Family declared it would no longer provide a free service, and existing users were given 180 days to purchase either a Norton Family Premier or a Norton Security Premium subscription.
Overview
Norton Family can monitor Internet, instant messenger, and social-networking sites' traffic. On shared computers, it depends on the Norton Safety Minder to enforce policies and report activities for individual accounts. Norton Family emphasizes transparency between parents and children, attempting to create "open" and "ongoing dialogue". A system tray icon is intended to make the software's presence known. Also, as a security company, Symantec decided not to introduce something spyware-like. The service integrates with the Transport Driver Interface, allowing it to control Internet access for any Internet-enabled application. Attempts to bypass Safety Minder are logged.
Norton Family blocks specific sites using nearly four dozen categories. It preconfigures this feature based on a birth date. While Norton Family does not analyze pages in real-time, uncategorized sites are queued for heuristic and manual review. On attempt to visit a blocked site, children may receive a pop-up warning and an explanatory note in the browser with space to appeal the block. Otherwise, children will receive a warning, with the option of continuing to the page. Whenever a rule is ignored, it is logged. To make log files easier to parse, advertisement URLs are omitted from logs.
The service can define when children have access to a computer. Parents can define a range of hours when children are blocked, with separate settings for each child and for weekdays and weekends. A daily time quota can be configured as well. Children receive a warning 15 minutes before blocks begin or a time limit is exceeded. The amount of time left can be checked via the Norton Family system tray icon. In the last minute before forced logout, children can postpone it by pressing a button, disabling the desktop and leaving only the Norton Family icon functional. Parents can then enter their credentials to grant a time extension. The time-management feature can also warn children, rather than cutting off access. Exceeding limits will result in a log entry. Time limits are enforced across multiple PCs. Changing the system time does not affect Online Family; the activity will be logged, however.
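The behavior described above amounts to a simple scheduling policy. The sketch below is purely illustrative and is not Norton's actual implementation; the policy values, warning lead time, and function names are assumptions made for the example.

```python
from datetime import datetime, time, timedelta

# Illustrative policy values; real settings are configured per child by a parent.
BLOCKED_RANGES = [(time(21, 0), time(7, 0))]   # no access 9 PM - 7 AM
DAILY_QUOTA = timedelta(hours=2)               # total screen time per day
WARNING_LEAD = timedelta(minutes=15)           # warn 15 minutes ahead

def in_blocked_range(now: datetime) -> bool:
    """Return True if the current clock time falls inside a blocked range."""
    t = now.time()
    for start, end in BLOCKED_RANGES:
        if start <= end:
            if start <= t < end:
                return True
        else:  # range wraps past midnight
            if t >= start or t < end:
                return True
    return False

def check_access(now: datetime, used_today: timedelta) -> str:
    """Decide whether to allow, warn, or block, mirroring the behavior described above."""
    if in_blocked_range(now):
        return "block"                  # outside allowed hours
    remaining = DAILY_QUOTA - used_today
    if remaining <= timedelta(0):
        return "block"                  # daily quota exhausted (would be logged)
    if remaining <= WARNING_LEAD:
        return "warn"                   # 15-minute warning before forced logout
    return "allow"

print(check_access(datetime(2024, 5, 1, 18, 0), timedelta(hours=1, minutes=50)))  # prints "warn"
```

A real client would additionally persist usage across multiple PCs and log every warning and block, as the product does.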
The tracking of search queries requires that a compatible search engine be used. An option to force search engines to filter objectionable material is present, although Google Encrypted Search may bypass it. The service can block the transmission of personal information. Parents first complete a blacklist of information which should not be communicated via IM or a social-networking site. Children will be presented with a warning message when attempting to share that information via IM or a social-networking site, and the information will not be sent. Norton Family can also notify parents when children access social-networking sites, create an account, or misrepresent their age. This feature is browser-dependent; Internet Explorer or Firefox is required on Windows, and Safari or Firefox is required on Mac. Without a compatible browser, Norton Family can still record access to social-networking sites as part of monitoring Internet activities. Some people may consider this spyware.
IM control manages traffic at the protocol level, allowing it jurisdiction over a wide range of desktop clients. There are three levels of control; at the strictest, children cannot chat with friends until each one has received parental approval. Attempting to start or respond to a conversation triggers a warning and an offer to send a message to parents asking for permission. The second level allows all chat connections; however, parents are notified about chats with new friends and such conversations are recorded. Parents can choose to block the friend, keep monitoring, or allow unmonitored chat. At the loosest level, any IM clients used are listed and the friends with whom children engage in conversation are listed, with options to block or monitor each friend.
Norton Family can e-mail parents when certain events occur. Notifications include the noted event, the child's name, and the time of the incident. Notifications are stamped in Eastern Time. Parents can choose which events trigger a notification, add e-mail addresses to forward notices to, and grant other parents full privileges over children. Reports of activities are also presented in the online console. Settings can be changed and applied almost instantaneously. Children will receive a pop-up announcing the updated rules. Rules can be overridden immediately by a parent.
As the product of an international company, Norton Family also supports multiple languages. On the other hand, the web filter may not work properly if other antivirus software runs web protection at the same time, for example Avira Antivirus web protection.
See also
List of parental control software
List of content-control software
Content-control software
Parental controls
References
External links
NortonLifeLock software
Internet safety
Proprietary software
Content-control software
Norton Family |
487054 | https://en.wikipedia.org/wiki/K3b | K3b | K3b (from KDE Burn Baby Burn) is a CD and DVD authoring application by KDE for Unix-like computer operating systems. It provides a graphical user interface to perform most CD/DVD burning tasks like creating an Audio CD from a set of audio files or copying a CD/DVD, as well as more advanced tasks such as burning eMoviX CD/DVDs. It can also perform direct disc-to-disc copies. The program has many default settings which can be customized by more experienced users. The actual disc recording in K3b is done by the command line utilities cdrecord or cdrkit, cdrdao, and growisofs. As of version 1.0, K3b features a built-in DVD ripper.
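Since the actual disc recording is delegated to command-line utilities, a front end like this mainly builds and supervises those commands. The following rough sketch illustrates that delegation pattern; it is not K3b's real code (which is C++/Qt), and the device path, speed, and option choices are assumptions that vary by system and tool version.

```python
import subprocess

def burn_iso(image_path: str, device: str = "/dev/sr0", dvd: bool = False, speed: int = 8) -> None:
    """Hand the actual burn off to a command-line tool, as K3b-style front ends do."""
    if dvd:
        # growisofs writes an existing ISO image to a DVD
        cmd = ["growisofs", "-dvd-compat", "-Z", f"{device}={image_path}"]
    else:
        # cdrecord (or wodim on cdrkit-based systems) writes the image to a CD-R/RW
        cmd = ["cdrecord", "-v", f"dev={device}", f"speed={speed}", image_path]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # raises CalledProcessError if the burn fails

# Example usage (hypothetical image file): burn_iso("project.iso", dvd=True)
```

In K3b itself, a Qt GUI wraps this kind of invocation with project management, device and media detection, and progress reporting.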
As is the case with most KDE applications, K3b is written in the C++ programming language and uses the Qt GUI toolkit. Released under the GNU General Public License, K3b is free software.
A first alpha of a KDE Platform 4 version of K3b was released on 22 April 2009, the second on 27 May 2009 and a third on 14 October 2009.
K3b is a software project that was started in 1998, and is one of the mainstays of the KDE desktop.
Features
Some of K3b's main features include:
Data CD/DVD burning
Audio CD burning
CD Text support
Blu-ray/DVD-R/DVD+R/DVD-RW/DVD+RW support
CD-R/CD-RW support
Mixed Mode CD (Audio and Data on one disk)
Multisession CD
Video CD/Video DVD authoring
eMovix CD/eMovix DVD
Disk-to-disk CD and DVD copying
Erasing CD-RW/DVD-RW/DVD+RW
ISO image support
Ripping Audio CDs, Video CDs, Video DVDs
K3b can also burn data CDs with file system options for Linux/Unix-based operating systems, Windows, DOS, very large files (UDF), combined Linux/Unix and Windows use, Rock Ridge, and Joliet.
K3b's full list of features (the list below may still be incomplete):
Creating data CDs:
Add files and folders to your data CD project via drag'n'drop.
Remove files from your project, move files within your project.
Create empty directories within your project.
Write data CDs on-the-fly directly without an image file, or with an image file. It's also possible to just create the image file and write it to CD later.
Rock Ridge and Joliet file system support.
Rename files in your project.
Let K3b rename all the MP3/Ogg files you add to your project to a common format like "artist - title.mp3".
For advanced users: support for nearly all the mkisofs options.
Verifying the burned data.
Support for multiple El Torito boot images.
Multisession support
Creating audio CDs:
Pluggable audio decoding. Plugins for WAV, MP3, FLAC, and Ogg Vorbis are included.
CD-TEXT support. Will automagically be filled in from tags in audio files.
Write audio CDs on-the-fly without decoding audio files to WAV first.
Normalize volume levels before writing.
Cut audio tracks at the beginning and the end.
Creating Video CDs:
VCD 1.1, 2.0, SVCD
CD-i support (Version 4)
Creating mixed-mode CDs:
CD-Extra (CD-Plus, Enhanced Audio CD) support.
All data and audio project features.
Creating eMovix CDs
CD Copy
Copy single- and multi-session data CDs
Copy Audio CDs
Copy Enhanced Audio CDs (CD-Extra)
Copy CD-Text
Add CD-Text from CDDB
CD Cloning mode for perfect single session CD copies
DVD burning:
Support for DVD-R(W) and DVD+R(W)
Creating data DVD projects
Creating eMovix DVDs
Formatting DVD-RWs and DVD+RWs
CD Ripping:
CDDB support via HTTP, CDDBP, and a local CDDB directory.
Sophisticated pattern system to automatically organize the ripped tracks in directories and name them according to album, title, artist, and track number.
CD-TEXT reading. May be used instead of CDDB info.
K3b stores CDDB info of the ripped tracks which will automatically be used as CD-TEXT when adding the ripped files to an audio project.
Plugin system to allow encoding to virtually every audio format. Plugins to encode to Ogg Vorbis, MP3, FLAC, and all formats supported by SoX are included.
DVD Ripping and DivX/XviD encoding
Save/load projects.
Blanking of CD-RWs.
Retrieving table of contents and CD-R information.
Writing existing ISO images to CD or DVD with optional verification of the written data.
Writing cue/bin files created for CDRWIN
DVD copy (no video transcoding yet)
Enhanced CD device handling:
Detection of max. writing and reading speed.
Detection of Burnfree and Justlink support.
Good media detection and optional automatic CD-RW and DVD-RW blanking
KParts-Plugin ready.
See also
List of optical disc authoring software
Brasero, a GTK+ optical disc authoring program.
References
External links
K3b website
Bug tracker
Extragear
Free optical disc authoring software
Free software programmed in C++
KDE Applications
Linux CD ripping software
Linux CD/DVD writing software
Optical disc authoring software
Optical disc-related software that uses Qt |
4671403 | https://en.wikipedia.org/wiki/Model-driven%20engineering | Model-driven engineering | Model-driven engineering (MDE) is a software development methodology that focuses on creating and exploiting domain models, which are conceptual models of all the topics related to a specific problem. Hence, it highlights and aims at abstract representations of the knowledge and activities that govern a particular application domain, rather than the computing (i.e. algorithmic) concepts.
Overview
The MDE approach is meant to increase productivity by maximizing compatibility between systems (via reuse of standardized models), simplifying the process of design (via models of recurring design patterns in the application domain), and promoting communication between individuals and teams working on the system (via a standardization of the terminology and the best practices used in the application domain).
A modeling paradigm for MDE is considered effective if its models make sense from the point of view of a user that is familiar with the domain, and if they can serve as a basis for implementing systems. The models are developed through extensive communication among product managers, designers, developers and users of the application domain. As the models approach completion, they enable the development of software and systems.
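As a toy illustration of how a model can serve as a basis for implementing systems (not tied to any particular MDE standard or tool; the model format and the generated code shape are invented for this example), a domain model captured as plain data can be transformed mechanically into source code:

```python
# A tiny "domain model": entities and their typed attributes.
MODEL = {
    "Customer": {"name": "str", "email": "str"},
    "Order": {"customer_id": "int", "total": "float"},
}

def generate_class(entity: str, attrs: dict) -> str:
    """Transform one model element into Python source (a trivial model-to-text transformation)."""
    lines = [f"class {entity}:"]
    params = ", ".join(f"{a}: {t}" for a, t in attrs.items())
    lines.append(f"    def __init__(self, {params}):")
    for a in attrs:
        lines.append(f"        self.{a} = {a}")
    return "\n".join(lines)

if __name__ == "__main__":
    for entity, attrs in MODEL.items():
        print(generate_class(entity, attrs))
        print()
```

Real MDE toolchains such as those built on EMF work at a much larger scale, with formal metamodels and standardized transformation languages, but the principle of deriving artifacts from models is the same.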
Some of the better known MDE initiatives are:
The Object Management Group (OMG) initiative Model-Driven Architecture (MDA) which is leveraged by several of their standards such as Meta-Object Facility, XMI, CWM, CORBA, Unified Modeling Language (to be more precise, the OMG currently promotes the use of a subset of UML called fUML together with its action language, ALF, for model-driven architecture; a former approach relied on Executable UML and OCL, instead), and QVT.
The Eclipse "eco-system" of programming and modelling tools represented in general terms by the (Eclipse Modeling Framework). This framework allows the creation of tools implementing the MDA standards of the OMG; but, it is also possible to use it to implement other modeling-related tools.
History
The first tools to support MDE were the Computer-Aided Software Engineering (CASE) tools developed in the 1980s. Companies like Integrated Development Environments (IDE - StP), Higher Order Software (now Hamilton Technologies, Inc., HTI), Cadre Technologies, Bachman Information Systems, and Logic Works (BP-Win and ER-Win) were pioneers in the field.
The US government got involved in modeling definitions, creating the IDEF specifications. Several variations of the modeling definitions emerged (see Booch, Rumbaugh, Jacobson, Gane and Sarson, Harel, Shlaer and Mellor, and others); they were eventually joined, creating the Unified Modeling Language (UML). Rational Rose, a product for UML implementation, was developed by Rational Corporation (Booch). Increasing automation yields higher levels of abstraction in software development; this abstraction promotes simpler models with a greater focus on the problem space. Combined with executable semantics this elevates the total level of automation possible. The Object Management Group (OMG) has developed a set of standards called model-driven architecture (MDA), building a foundation for this advanced architecture-focused approach.
According to Douglas C. Schmidt, model-driven engineering technologies offer a promising approach to address the inability of third-generation languages to alleviate the complexity of platforms and express domain concepts effectively.
Tools
Notable software tools for model-driven engineering include:
AADL from Carnegie-Mellon Software Engineering Institute
Acceleo an open source code generator from Obeo
Actifsource
ATLAS Transformation Language or ATL, a model transformation language from Obeo
Eclipse Modeling Framework (EMF)
Enterprise Architect from Sparx Systems
Generic Eclipse Modeling System (GEMS)
GeneXus a Knowledge-based, declarative, multi-platform, multi-language development solution
Genio a CASE / RAD (Rapid Application Development) / Agile / Model Driven Platform developed by Quidgest
Graphical Modeling Framework (GMF)
JetBrains MPS, a metaprogramming system from JetBrains
MagicDraw from No Magic Inc
MERODE JMermaid from KU Leuven (educational)
MetaEdit+ from MetaCase
ModelCenter from Phoenix Integration
Open ModelSphere
OptimalJ from Compuware
PREEvision from Vector Informatik
Rhapsody from IBM
RISE Editor from RISE to Bloome Software
PowerDesigner from SAP
Simulink from MathWorks
Software Ideas Modeler from Dusan Rodina
Sirius an Eclipse Open Source project to create custom graphical modeling workbenches
Together Architect from Borland
Umple from the University of Ottawa
Uniface from Compuware
YAKINDU Statechart Tools open source tool build on top of Eclipse
See also
Application lifecycle management (ALM)
Business Process Model and Notation (BPMN)
Business-driven development (BDD)
Domain-driven design (DDD)
Domain-specific language (DSL)
Domain-specific modeling (DSM)
Domain-specific multimodeling
Language-oriented programming (LOP)
List of Unified Modeling Language tools
Model transformation (e.g. using QVT)
Model-based testing (MBT)
Modeling Maturity Level (MML)
Model-based_systems_engineering (MBSE)
Service-oriented modeling Framework (SOMF)
Software factory (SF)
Story-driven modeling (SDM)
References
Further reading
David S. Frankel, Model Driven Architecture: Applying MDA to Enterprise Computing, John Wiley & Sons,
Marco Brambilla, Jordi Cabot, Manuel Wimmer, Model Driven Software Engineering in Practice, foreword by Richard Soley (OMG Chairman), Morgan & Claypool, USA, 2012, Synthesis Lectures on Software Engineering #1. 182 pages. (paperback), (ebook). http://www.mdse-book.com
External links
Model-Driven Architecture: Vision, Standards And Emerging Technologies at omg.org
Systems engineering
Unified Modeling Language |
34491284 | https://en.wikipedia.org/wiki/Panther%20Software | Panther Software | is a Japanese video game and software company. Founded in 1987 as Panther Studios Ltd., the company changed its name to Panther Software in 1991. They produced video games for the MSX, Sharp X68000, PlayStation, Dreamcast and Xbox.
Video games
Studio Panther
Tenkyuhai, MSX and Sharp X68000 (1989)
Tenkyuhai Special: Tougen no Utage, MSX (1989) and Sharp X68000 (1990)
Kami no Machi, MSX (1989)
Hana no Kiyosato: Pension Story, MSX (1989)
Ooedo Hanjouki, Sharp X68000 (1989)
Tenkyuhai Special: Tougen no Utage 2, MSX (1990)
Tenkyuuhai Special: Tougen no Utage Part 2 - Joshikousei Hen, Sharp X68000 (1990)
Panther Software
Joshua, Sharp X68000 (1992)
Ku2 Front Row, Sharp X68000 (1992)
Ku2, Sharp X68000 (1993)
Hamlet, NEC PC-98 (1993)
Space Griffon VF-9, PlayStation (1995)
Kitchen Panic, PlayStation (1998)
Twins Story: Kimi ni Tsutaetakute, PlayStation (1999)
Aoi Hagane no Kihei: Space Griffon, Dreamcast (1999)
Metal Dungeon, Xbox (2002)
Braveknight, Xbox (2002)
Aoi Namida, Xbox (2004)
Kana: Little Sister, Xbox (Cancelled)
External links
Panther Software at Neoseeker
List of Studio Panther/Panther Software games at GameFAQs
List of Panther Software games at Giant Bomb
Panther Software at MobyGames
Video game companies of Japan
Video game development companies
Video game publishers
Video game companies established in 1987 |
189114 | https://en.wikipedia.org/wiki/Fravia | Fravia | Francesco Vianello (30 August 1952 – 3 May 2009), better known by his nickname Fravia (sometimes +Fravia or Fravia+), was a software reverse engineer, who maintained a web archive of reverse engineering techniques and papers. He also worked on steganography. He taught on subjects such as data mining, anonymity and stalking.
Vianello spoke six languages (including Latin) and had a degree in the history of the early Middle Ages. He was an expert in linguistics-related informatics. For five years he made available a large quantity of material related to reverse engineering through his website, which also hosted the advice of reverse engineering experts, known as reversers, who provided tutorials and essays on how to hack software code as well as advice related to the assembly and disassembly of applications, and software protection reversing.
Vianello's web presence dates from 1995 when he first got involved in research related to reverse code engineering (RCE). In 2000 he changed his focus and concentrated on advanced internet search methods and the reverse engineering of search engine code.
His websites "www.fravia.com" and "www.searchlores.org" contained a large amount of specialised information related to data mining. His website "www.searchlores.org" has been called a "very useful instrument for searching the web", and his "www.fravia.com" site has been described as "required reading for any spy wanting to go beyond simple Google searches."
There are still several mirrors of Fravia's old websites, even though the original domain names are no longer functional. The last mirror of Search Lores linked originally by Fravia directly from his website ("search.lores.eu") went offline in February 2020, but a new mirror came to existence later in 2020 at fravia.net.
As Francesco Vianello
In the 1980s, he was a member of the Esteban Canal chess club in Venice, before moving to CES in Brussels.
He graduated in history from the University of Venice in 1994 and obtained a master's degree in history and philosophy in 1999. He was interested in studying the arts and humanities and aimed to build collective knowledge on the particular subject of learning and Web-searching as an art.
Fravia participated as a speaker in the 22nd Chaos Communication Congress, where his lecture was on the subject of hacking.
As Fravia
Vianello was focusing on privacy and created the myth of Fjalar Ravia (aka fravia+, msre, Spini, Red Avenger, ~S~ Sustrugiel, Pellet, Ravia F.) as protection from hostile seekers.
At least two distinct phases of his internet public work can be identified.
The first, from 1995 (the starting date of his internet presence) to 1999, was related to software reversing, software protection, decompiling, disassembling, and deep software code deconstruction. At that time the WDasm disassembler by Eric Grass, which also included a debugger, was a popular download.
The second, starting in 2000 where the first stage left off, was focused on an (apparently) entirely different field: Internet knowledge searching. In February 2001, Vianello gave a lecture at the École Polytechnique in Paris about "The art of information searching on today's Internet". He also presented his work "Wizard searching: reversing the commercial Web for fun and knowledge" at REcon 2005.
First Period: Reverse Engineering ("Reality Cracking")
In the first period Vianello focused on reverse-engineering software protection, content copyright, and software patents. The steps for cracking software protection were in many cases published on his website, in the form of essays and Old Red Cracker's lessons.
Vianello asked the community to remove from the web every copy of his old site (www.fravia.org - now a spam advertisement website), corresponding to this period, because "The idea was to convert young crackers [...] The experiment worked only in part, hence the decision a couple of years ago to freeze that site". Nevertheless, some mirrors still exist. The site has been described as containing "useful tools and products".
According to the 2001 ACM Multimedia Workshops of the Association for Computing Machinery, Vianello's website contained information which could assist hackers of a certain classification who were not skilled enough "to mount a new or novel attack". His website also analysed brute force attacks on steganography.
This period included papers related to reality-cracking, i.e. the capacity of the seeker to decode the hidden facts behind appearance.
Reverse engineering a legitimately bought program and studying or modifying its code for knowledge was claimed as legal by Vianello at least in the European Union under some restricted conditions.
Second Period: Web Searching ("Search Lores")
The transition between the two phases occurred after realizing the growing importance of Internet search engines as tools to access information. According to his vision, access to information should not be restricted, and he advocated for a true openness of web information contents. He strongly criticized the large amount of advertising on the Internet, which he considered to promote unnecessary products to a population of naive consumers.
Richard Stallman, in his web article "Ubuntu Spyware: What to do?", mentions that it was Vianello who alerted him to the fact that performing a file search on a computer running Microsoft Windows would cause it to send a network packet to an Internet server, which was then detected by the firewall in Vianello's computer.
In the second stage of his work, Vianello explained how content is currently structured on the World Wide Web and the difficulties of finding relevant information through search engines because of the growing number of ads that search engines promote today.
In 2005, Vianello was the keynote speaker at the T2 infosec conference. The subject of his speech was: "The Web - Bottomless Cornucopia and Immense Garbage Dump".
+HCU
Vianello was a member of the so-called High Cracking University (+HCU), founded by Old Red Cracker to advance research into Reverse Code Engineering (RCE). The addition of the "+" sign in front of the nickname of a reverser signified membership in the +HCU.
+HCU published a new reverse engineering problem annually and a small number of respondents with the best replies qualified for an undergraduate position at the "university". Vianello's website was known as "+Fravia's Pages of Reverse Engineering" and he used it to challenge programmers as well as the wider society to "reverse engineer" the "brainwashing of a corrupt and rampant materialism". In its heyday, his website received millions of visitors per year and its influence was described as "widespread".
Nowadays most of the graduates of +HCU have migrated to Linux and few have remained as Windows reversers. The information at the university has been rediscovered by a new generation of researchers and practitioners of RCE who have started new research projects in the field.
Legacy
Vianello has been described as an inspiration for many hackers and reversers, a friend of the founder of the CCC Wau Holland, and a motivation for Jon Lech Johansen to understand the inner workings of computer programs. Johansen commented in a blog post that Fravia's site was a goldmine during his education as a reverse engineer. In his later years, he moved from software reversing to free software and searching the web further. His website has been described as the meeting point of the people who wanted to search the web deeper still.
In September 2008, Vianello stopped updating his site and holding conferences, after being diagnosed with and receiving treatment for squamous cell carcinoma of the tonsil, which metastasized. His site was frozen for several months but was updated again on 9 March 2009 while he was slowly recovering and focusing on Linux. He died suddenly on Sunday, 3 May 2009 at the age of 56.
Published works
Francesco Vianello, Gli Unruochingi e la famiglia di Beggo conte di Parigi. (ricerche sull'alta aristocrazia carolingia) // Bollettino dell'Istituto storico italiano per il Medioevo 91 (1984).
Francesco Vianello, Università di Padova, I mercanti di Chiavenna in età moderna visti dalla Terraferma veneta.
Francesco Vianello,
Fravia (ed.) Annotation and exegesis of Origo Gentis Langobardorum.
Notes and references
External links
An archive of Fravia's Searchlores (no longer updated)
Fravia's website (European mirror)
Fravia's Real Identity (European mirror)
Fravia's fake auto-biography (European mirror)
Fravia's farewell (April 2009) (European mirror)
Fravia at ccc congress 2005
Fravia at ccc congress 2002
Last known mirror of the original "reversing site"
Video of a Fravia conference presentation at Recon 2005 in Montreal
Video of a Fravia conference presentation at Recon 2006 in Montreal
I have seen the ICE age, by Malay
+Greythorne's Privacy Nexus (Fravia's Partner +gthorne)
Iczelion's Win32 Assembly Homepage
1952 births
2009 deaths
Deaths from cancer in Belgium
Italian computer programmers
Italian historians
Computer security specialists
20th-century historians |
24288226 | https://en.wikipedia.org/wiki/Uzbl | Uzbl | Uzbl is a discontinued free and open-source minimalist web browser designed for simplicity and adherence to the Unix philosophy. Development began in early 2009, and the software was still considered alpha by its developers. The core component of Uzbl is written in C, but other languages are also used, most notably Python. All parts of the Uzbl project are released as free software under GNU GPL-3.0-only.
The name comes from the word usable, spelled in lol speak.
Development of Uzbl never left the alpha stage. Uzbl was originally designed for Arch Linux, but operates with other Linux distributions and BSD systems. Compilation guides are available for Gentoo Linux, Ubuntu, MacPorts, and the Nix package manager. The project was ultimately "abandoned" due to lack of time.
Despite being in the early stages of development, Uzbl gained prominence as a minimalist browser. As of 2019, further development of the project has been discontinued.
Design
Uzbl follows the Unix philosophy, “Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.” As a result, Uzbl does not contain many of the features of other browsers. Uzbl has none of its own tool bars or graphical control elements, and does not manage bookmarks, history, downloads, or cookies, leaving them to be handled by external programs or scripts. These scripts are typically user-written, although some are available for download, like uzbl_tabbed for tabbed browsing support. For interaction it can read commands from standard input, from a FIFO pipe, or from a POSIX local IPC socket, or it can be passed text files such as a configuration file. This design is intentional, allowing for more customization.
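In practice this means a running Uzbl instance can be driven by writing textual commands to its FIFO or socket from any external program or script. The snippet below is a minimal sketch of that style of control; the FIFO path is a placeholder (uzbl names the FIFO per instance), and the exact command vocabulary varied between uzbl versions.

```python
# Drive a running uzbl instance by writing a command line to its control FIFO.
FIFO_PATH = "/tmp/uzbl_fifo_example"   # hypothetical path; the real FIFO is named per instance

def send_command(command: str, fifo_path: str = FIFO_PATH) -> None:
    """Append one newline-terminated command to the browser's command FIFO."""
    # Opening a FIFO for writing blocks until a reader (the browser) has it open.
    with open(fifo_path, "w") as fifo:
        fifo.write(command + "\n")

# e.g. ask the browser to load a page (command name depends on the uzbl version/configuration in use)
send_command("uri http://example.org")
```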
Features
Uzbl uses the WebKit layout engine, and therefore inherits support for many web standards, including HTML, XML, XPath, Cascading Style Sheets, ECMAScript (JavaScript), DOM, and SVG, passing the Acid3 browser test. WebKit supports Netscape-style plugins such as Adobe Flash Player and MPlayer.
Uzbl's design focuses on keyboard control and hot keys, although it also supports mouse and other pointing device input. Like the Pentadactyl and Vimperator Firefox extensions, Uzbl employs a mode-based interface derived from the vi and Vim text editors. Rather than move the cursor to an address bar or a link, a user presses a hotkey to switch to "command" mode. From this mode the user may: select links in the viewport through assigned keys (0 through 9 by default) or through typing an unambiguous string of the link text; navigate to another web page by typing its URL; modify settings; and perform other normal web-browsing tasks. While this mode-based interface creates an initially steep learning curve, once learned it typically allows a user greater speed and convenience than many other browsers. Uzbl allows configuration of the hot keys used.
History
The idea of creating a new web browser started in spring 2009 on the internet forums of Arch Linux. Dieter Plaetinck started the development of the browser and was then supported by other developers. The first code was published on April 21, 2009. The product was usable after only two months of development. Besides compilation guides for a series of Linux distributions and Mac OS X/Darwin (MacPorts), several pre-compiled binaries are available, although officially there is not yet a version marked as stable. On September 21, 2009, Uzbl was accepted into the Debian operating system and was migrated to its testing branch on October 2, 2009.
References
External links
slash-dot story
FOSDEM Talk Video and Slides
Free software programmed in C
Free web browsers
POSIX web browsers
Software based on WebKit
Web browsers that use GTK
Discontinued web browsers |
6849466 | https://en.wikipedia.org/wiki/Willy%20Susilo | Willy Susilo | Willy Susilo () is an Australian cybersecurity scientist and cryptographer. He is a Distinguished Professor at the School of Computing and Information Technology, Faculty of Engineering and Information Sciences University of Wollongong, Australia.
Willy Susilo is a fellow of the IEEE, IET, ACS, and AAIA. He is the director of the Institute of Cybersecurity and Cryptology at the School of Computing and Information Technology, University of Wollongong. Willy is an innovative educator and researcher. Currently, he is the Head of the School of Computing and Information Technology at UOW (2015 - now). Prior to this role, he was awarded the prestigious Australian Research Council Future Fellowship in 2009. He was formerly the Head of the School of Computer Science and Software Engineering (2009 - 2010) and the Deputy Director of the ICT Research Institute at UOW (2006 - 2008).
He is currently serving as an Associate Editor of IEEE Transactions on Dependable and Secure Computing (TDSC) and has served as an Associate Editor of IEEE Transactions on Information Forensics and Security (TIFS). He is the Editor-in-Chief of Elsevier's Computer Standards & Interfaces and of the Information journal. His research interest is cybersecurity and cryptography.
Willy obtained his PhD from the University of Wollongong in 2001. He has published more than 400 papers in journals and conference proceedings in cryptography and network security. He has served as the program committee member of several international conferences.
In 2016, he was awarded "Researcher of the Year" at UOW, due to his research excellence and contributions. His work on the creation of short signature schemes has been well cited and is part of an IETF draft.
Biography
Willy received his Bachelor's degree from the Faculty of Engineering at Universitas Surabaya, Indonesia. The degree would normally have taken 4.5 years, but he completed it within 3.5 years, graduating summa cum laude in 1993. He went to the University of Wollongong, Australia, to pursue his Master's and Ph.D. degrees. He was awarded a Ph.D. degree in 2001 from the University of Wollongong, Australia.
Research
Willy Susilo's research is in the area of cybersecurity and cryptography. His primary research focus is to design solutions and cryptographic algorithms to contribute towards securing the cyberspace.
Publications and awards
Distinguished Professor Willy Susilo is the author or co-author of over 400 research papers. His work spans cryptography, computer security, information technology, cybersecurity, and network security.
2021, IET Fellow
2021, IEEE Fellow
2021, ACS Fellow
2021, AAIA Fellow
2020, "Vice Chancellor's Global Strategy Award"
2019, "Vice-Chancellor's Award For Research Supervision"
2016, "Vice Chancellor's Research Excellence Award for Researcher of the Year"
Books
F. Guo, W. Susilo and Y. Mu. Introduction to Security Reduction. Springer, 2018.
K.C. Li, X. Chen, and W. Susilo. Advances in Cyber Security: Principles, Techniques, and Applications. Springer, 2019.
X. Chen, W. Susilo, and E. Bertino. Cyber Security Meets Machine Learning. Springer, 2021.
Professional Services
Willy Susilo is the Editor-in-Chief of two journals. He has also served as General Co-chair and Program Committee Co-chair of more than 20 international conferences in cryptography and cybersecurity.
Editor-in-Chief
Computers Standard and Interfaces (Elsevier)
Information journal (MDPI)
Professional Membership
Fellow of IET.
Fellow of IEEE.
Fellow of Asia-Pacific Artificial Intelligence Association (AAIA).
Fellow of the Australian Computer Society (ACS).
Member of the International Association for Cryptologic Research (IACR).
References
External links
University of Wollongong personal homepage
DBLP
Google Scholar: Willy Susilo
Australian computer scientists
Living people
University of Wollongong faculty
Year of birth missing (living people) |
581509 | https://en.wikipedia.org/wiki/List%20of%20fictional%20diseases | List of fictional diseases | This article is a list of fictional diseases, disorders, infections, and pathogens which appear in fiction where they have a major plot or thematic importance. They may be fictional psychological disorders, magical, from mythological or fantasy settings, have evolved naturally, been genetically modified (most often created as biological weapons), or be any illness that came forth from the (ab)use of technology.
Items in this list are followed by a brief description of symptoms and other details.
In comics and literature
{| class="wikitable sortable"
!width=200pt|Name
!width=200pt|Source
!Symptoms
|-
|AMPS - Acquired Metastructural Pediculosis
|Pontypool Changes Everything by Tony Burgess
|A "metaphysical, deconstructionist" virus spread by the English language. Symptoms begin with Palilalia as they repeat certain words (usually terms of endearment), proceeding to full Aphasia and finally cannibalistic rage as the victim falls to insanity from an inability to express themselves clearly.
|-
|Andromeda
|The Andromeda Strain by Michael Crichton
|A rapidly mutating alien pathogen that (in its most virulent form) causes near-instantaneous blood-clotting.
|-
|ARIA – Alien Retrograde Infectious Amnesia
|The Aria Trilogy by Geoff Nelder
|A plague accidentally contracted from an "alien suitcase". Symptoms appear to be non-specific fever-like symptoms and retrograde amnesia.
|-
|Atlantis Complex
|Artemis Fowl: The Atlantis Complex by Eoin Colfer
|A psychosis common in guilt ridden fairies, but is contracted by Artemis by his dabbling in fairy magic. The symptoms include obsessive compulsive behavior, paranoia, multiple personality disorder, and in his case professing his love to Holly Short.
|-
|Bazi Plague
|The Gor series by John Norman
|Bazi plague is a deadly, rapidly spreading disease with no known cure. Its symptoms include pustules that appear all over the body, and a yellowing of the whites of the eyes.
|-
|Black Trump Virus
|Wild Cards by George R. R. Martin
|The Black Trump virus is a variant of Xenovirus Takis-B. Rather than a cure, this retrovirus was designed to kill aces, jokers, latents, and wild card carriers. Dr. Tachyon's original Trump virus was designed to turn wild carders back into nats (a slang term for naturals), those who do not carry Xenovirus Takis-A in their system.
|-
|Bloodfire
|Blood Nation
|A virus that gestated in wolves two thousand years ago. The first to be infected was Genghis Khan. It causes the symptoms usually associated with vampirism, photosensitivity and invincibility. The entire nation of Russia is infected, except for a few feral children. The virus can cause extreme mutation, for example the snake's tail present in the Khan's head scientist.
|-
|Brainpox "Cobra"
|The Cobra Event
|A genetically engineered recombinant virus made from the nuclear polyhedrosis virus, the rhinovirus, and smallpox. It causes nightmares, fever, chills, runny nose, encephalitis (brain swelling), and herpes-like boils in the mouth and genitals, followed by a short period of aggression and autocannibalism preceding death. Used as a bioterror weapon.
|-
|Buscard's Murrain a.k.a. Wormword
|"Entry Taken from a Medical Encyclopaedia" by China Miéville
|An echolalia-like disease in which a specific pronunciation of a certain word—the "wormword"—leads to fatally degenerative cognitive ability as a result of an encephalopathy. Buscard's Murrain is infectious, as the afflicted desire to hear others pronounce the wormword.
|-
|Captain Trips ("Superflu", "tube neck", and "project blue")
|The Stand by Stephen King
|A deadly, flu-based virus. Created as a biological weapon codenamed Blue. Causes a lethally high fever and is highly contagious. It is deadly because as the body fights off the disease, it mutates into different strains of influenza, making immunity next to impossible.
|-
|Chivrel
|Dray Prescot series by Kenneth Bulmer
|Victims suffer premature extreme aging.
|-
|Clone-Killing Nanovirus
|Star Wars Republic Commando: Hard Contact by Karen Traviss
|A nanovirus developed by the Confederacy of Independent Systems designed specifically to kill the clones of Jango Fett. Its creator, Ovolot Qail Uthan, is captured by Republic Commandos before her research is complete, however. In later books in the series, it is revealed (though not to any of the main characters, but to the reader through both Palpatine's and Dr. Uthan's private journals), Chancellor Palpatine secretly chooses not to completely destroy all evidence or research of the virus, but rather opts to hold onto it as a back-up plan, should the clone army ever be turned against him.
|-
|Collins' Syndrome
|The Legend of Deathwalker by David Gemmell
|A mutating disease that often starts with pain and sensitivity in the victim's nipples, then forms a temporary tumor in the brain as it feeds upon the genetic material of the brain cells, sapping away the victim's critical thinking skills and intelligence. Once it reaches its critical density, the tumor disbands into the bloodstream and the virus goes into a form of hibernation, leaving its victim in a state of near-absolute uselessness. Once the virus detects that it has entered a new host, due to differences in the protein markers of the victim's cells, the process begins again.
|-
|Gray brittle death
|The Colour Out of Space by H. P. Lovecraft
|A disease caused by infection with an alien entity called "The colour" by characters in the story, the disease affects anything living, including plants, insects, livestock, wild animals, and humans. Symptoms in plants include either stunting or growing abnormally large with much tasteless fruit and growing abnormally-shaped flowers and leaves followed by glowing in the dark with an indescribable color and finally losing their leaves and crumbling to gray dust. Insects become strangely bloated and oddly shaped before crumbling into grey dust. Livestock such as hogs grow abnormally large with tasteless meat before wasting away and crumbling to grey dust, while cattle and horses exhibit strange behavior followed by crumbling into a grey powder. In some of the wild animals, the disease causes animals to leave "strange footprints in the snow" that are recognizable as known animals but are off in anatomy and behavior, and rabbits have abnormally long strides. In humans, the disease causes its victims to slowly go insane and see things that are not there, talk incoherently, suffer memory loss followed by walking on all fours. The victim then begins glowing in the dark with an indescribable color and becoming increasingly weak and thin before crumbling to grey dust.
Human victims describe "being drained of something" or "having the life sucked out".
|-
|Cooties
|Various
|A term used by children in the United States, with varied meaning. "Cooties" generally refers to an invisible germ, bug, or microscopic monster, transferred by skin to skin contact, usually with a member of the opposite sex.
|-
|Coreopsis
|The Secret Life of Walter Mitty by James Thurber
|Used by surgeon Dr. Renshaw, presumably referring to some complication of the critical surgery in progress in the second of Mitty's fantasies in the 1939 story. '“Coreopsis has set in,” said Renshaw nervously. “If you would take over, Mitty?”'. Coreopsis is actually the name of a genus of flowering plants native to North, Central, and South America.
|-
| Curse of the Warmbloods
| The Underland Chronicles by Suzanne Collins
| A disease created by Doctor Neveeve in the city of Regalia. She gave the disease to fleas, which instead of getting infected, spread the disease around warm-blooded creatures, including people. Symptoms include purple blemishes, coughing, choking, and a swelled tongue. The cure was originally believed to be a plant named starshade, though the true cure was made in Regalia.
|-
|Dar-Kosis
|Gor by John Norman
|Dar-kosis is a virulent, horrible, wasting disease and is similar in many ways to leprosy. It is taught by the Initiates (who claim to be the voice of the Priest-Kings of Gor) that Dar-Kosis is a holy disease.
|-
|Death Stench
|Gyo by Junji Ito
|A virus designed by the Imperial Japanese Army during WWII, it was designed to be paired up with mechanical walking machines to carry infected hosts further towards enemies to be sickened. The Death Stench was let loose on Japan when the ship carrying the prototypes was destroyed by allied aircraft; the virus then began multiplying, synthesising new walking machines by harvesting iron from shipwrecks until the present day, when large quantities of infected sea life began invading the Kanto region. The Death Stench disease causes its hosts - which can range from fish to humans and other large mammals - to visibly bloat, and begin producing large quantities of gas containing the virus; when attached to a walking machine, this gas powers the machine's legs, which will remain active until its victim decays away and is no longer able to produce enough gas to make the machine move. It appears that the virus is airborne, although it can also be contracted via being attached to a vacant walking machine; amputating a limb that has become attached to a smaller walking machine is the only way to escape, and even then the machine will still use the limb as a 'power source'.
|-
|Demon Pox
|The Infernal Devices by Cassandra Clare
|Demon pox, also known as astriola, is a rare but debilitating disease that affects Shadowhunters and is caused by sexual contact with demons.
Mundanes are immune to the disease, as demon pox is assumed to be caused by the interaction of demon poisons with the angelic nature of Shadowhunters.
|-
|Descolada
|Speaker for the Dead by Orson Scott Card
|A quasi-conscious self-modifying organism capable of infecting any form of life. "Descolada" is also the Portuguese word for "unglued". In the context of the book, this refers to the Descolada virus's effects: it breaks the link of the DNA double helix (ungluing it) and induces mutations.
|-
|Despotellis
|Green Lantern Corps
|A sentient virus and a member of the Sinestro Corps. It could create non-sentient duplicates of itself, creating a plague capable of killing infected victims within minutes, and could also destroy these duplicates, leaving no trace of their presence. Among the victims of its plague was Kyle Rayner's mother.
|-
|Devotion
|Zombiecorns by John Green
|A disease caused by the genetically engineered corn strain d131y, which turns victims into mindless zombie-like creatures called "Z"s. Called Devotion because its victims only want to plant d131y and convert all humans to Zs to further the spread of the corn.
|-
|Diseasemaker's Croup
|Fragile Things by Neil Gaiman
|A disorder 'afflicting those who habitually and pathologically catalogue and construct diseases.' It is characterized by increasingly nonsensical speech and writing patterns and an obsessive insistence on trying to repeat previous statements out of context.
|-
|Dragon Pox
|The Harry Potter Series by J.K. Rowling
|Dragon pox is a potentially fatal contagious disease that occurs in wizards and witches. Its symptoms are presumably similar to Muggle illnesses like smallpox and chicken pox. However, in addition to leaving the victim's skin pockmarked, dragon pox causes a lasting greenish tinge. Simpler cases present with a green-and-purple rash between the toes and sparks coming out of the nostrils when the patient sneezes. Elderly patients are apparently more susceptible to dragon pox than younger ones. Gunhilda of Gorsemoor developed a cure for dragon pox, but the disease has not been completely eradicated, as is evidenced by the fact that it is still treated by the Magical Bugs ward at St. Mungo's Hospital.
|-
|Dryditch Fever
|Salamandastron by Brian Jacques
|A deadly disease causing weakness, hot flashes, chills, and dizziness. The victim is usually bedridden until eventual death. The only known cure is Flowers of Icetor boiled in spring water.
|-
|DX
|The Lost World by Michael Crichton
|An unknown prion dubbed "DX" by scientists on Isla Sorna. It is similar to mad cow disease and was the result of feeding ground-up sheep to carnivorous dinosaurs. DX increased the mortality rate of newborn dinosaurs and is eventually fatal to adult dinosaurs. In order to combat DX, InGen scientists released animals into the wild of Isla Sorna. The prion initially infected carnivorous dinosaurs such as velociraptors and procompsognathus, which would then spread the disease to herbivores such as apatosaurus, and the apatosaur carcasses would be eaten by compys, which would then spread the disease to other carcasses, and the cycle would repeat. Ian Malcolm said at the end of the novel that, because of the imbalance of carnivores and herbivores due to DX, the dinosaurs were doomed to die out.
|-
|Ebola Gulf A
|DC Comics
|Also known as "the Clench", due to the victims clenching their stomachs, Ebola Gulf is an evolved form of the ebola virus created by the terrorist mastermind Ra's al Ghul after he consulted the Wheel of Plagues.|- id="Fire-Us"
|Fire-Us (Sounds like "Virus")
|Fire-Us series
|A viral infection that infects extremely fast and only infects those that produce sex hormones (i.e. those after puberty and women before menopause) or are taking medicine that includes similar hormones. It was released by the President of the United States of America to start the world over, killing almost all adults within 2 weeks. As a result, children were left to fend for themselves, most of whom failed. Once all the targets of the virus were gone, it died out.
|-
|The Flare (virus VC321xb47)
|The Maze Runner
|A highly contagious virus that infects the brain of its host, turning them into crazed blood-thirsty cannibals (essentially zombies) that are called Cranks. Less than 1% of the population is immune to the virus, and are called Munies. There is no cure for The Flare, but many wealthy people slow down the onslaught of the symptoms with an illegal drug called The Bliss, which slows down their brain activity. It was released by the governments of the world to help control overpopulation after the Sun Flares, but it eventually killed most of the people in the world.
|-
|Foul-Drought
|The Heir of Mistmantle by M. I. McAllister
|Disease caused by drinking poisoned water. Animals who have it suffer pain and blurred sight, and some eventually die.
|-
|Georgia Flu
|Station Eleven by Emily St. John Mandel
|A variant of the flu that kills nearly all humans on earth, with an incubation period of only a few hours.
|-
|Goddag-goddagsjukan (Good Day, Good Day Disease)
|Sagan om Sune by Anders Jacobsson and Sören Olsson
|A disease lasting for a few hours, where the affected person can only say "Good Day, Good Day" despite attempts to say other words. Sune gets affected but later ends up cured by his primary schoolteacher Ulla-Lena Frid, who cures it with "ordinary simple curiosity" (Swedish: "vanlig enkel nyfikenhet").
|-
|The Gray Death
|Gail Carson Levine's The Two Princesses of Bamarre
|Disease created from the noxious gas from the defeated dragon Yune's stomach. It comes on with no warning and is not contagious. There are three stages of the disease. The first stage is the weakness, and it can last anywhere from a week to six months. The second stage is the sleeping, and it always lasts nine days. The last stage is fever, and it always lasts three days. At the end of the fever stage, the victim will die. The only cure is water sent down from the fairies' Mount Ziriat. The cure will only be discovered when cowards find courage and rain falls over all Bamarre.
|-
|Great Plague
|The Lord of the Rings by J.R.R. Tolkien
|A mysterious disease that swept down through every single kingdom of Middle-earth during the mid-1600s of the Third Age of the Sun. The Plague's origins are unknown except it was possibly contracted from the Corsairs who attacked Gondor in Third Age 1634, two years before the Plague occurred. The Plague was 90% fatal for nearly all inhabitants of Middle-earth, especially in Gondor and the North. It is based on the Black Death.
|-
|Greyscale
|A Song of Ice and Fire
|Greyscale is a typically nonfatal disease akin to leprosy. It is first introduced in Stannis Baratheon's daughter Shireen. When it infects children, greyscale generally leaves them malformed and disabled but alive. However, in A Dance with Dragons, it is revealed to be generally fatal to adults. The disease is contracted by touch and slowly turns the skin of the victim (small patches in children and the entire body in adults) into a gray, stone-like form. It is said that the disease also drives its adult victims insane.
|-
|Hanahaki Disease, or Hanahaki Byou
|Japanese, Korean, and Chinese pop band, anime, and manga fandoms
|Hanahaki Disease (花吐き病 (Japanese); 하나하키병 (Korean); 花吐病 (Chinese)) is a fictional disease where the victim of unrequited or one-sided love begins to vomit or cough up the petals and flowers of a flowering plant growing in their lungs, which will eventually grow large enough to render breathing impossible if left untreated. There is no set time for how long this disease lasts but it may last from 2 weeks to 3 months, in rare cases up to 18 months, until the victim dies unless the feelings are returned or the plants are surgically removed. There is also no set flower that blossoms in the lungs but it may be the enamoured’s favourite flower or favourite colour. Hanahaki can be cured through surgical removal of the plants' roots, but this excision also has the effect of removing the patient's capacity for romantic love. It may also erase the patient’s feelings for and memories of the enamoured. It can also be cured by the reciprocation of the victim's feelings. These feelings cannot be feelings of friendship but must be feelings of genuine love. The victim may also develop Hanahaki Disease if they believe the love to be one-sided but once the enamoured returns the feelings, they will be cured. In some literature other symptoms can be fever, uncontrollable shaking, loss of appetite, low body temperature, and hallucinations. Even after curing, with or without surgery, there can be irreversible damage to the lungs and, although very rare, in some cases the disease cannot be cured.
|-
|Hawaiian Cat Flu
|Garfield by Jim Davis
|A rare disease only contracted by cats. Its symptoms include a "voracious" appetite, a craving for Hawaiian food, listlessness, crankiness, and a compulsion to wear Hawaiian shirts and hula dance.
|-
|Herod's Flu (SHEVA)
|Darwin's Radio by Greg Bear
|A contagious, sexually transmitted human endogenous retrovirus (HERV) that causes flu-like symptoms and ultimately causes miscarriage of pregnancies. Though treated as a public health crisis by the Centers for Disease Control and Prevention and World Health Organization, the virus is later revealed to be a mechanism that causes rapid speciation and accelerates evolution.
|-
|Harlequin (not to be confused with Harlequin Ichthyosis, a severe genetic disorder)
|Harlequin Rex by Owen Marshall
|A progressive and fatal neurological disease that causes a re-awakening of primordial senses and behaviors, set in near-future Earth.
|-
|Hourman Virus
|DC One Million
|Created by the living star Solaris, this plague was caused by nanomachines. It acted like both a biological virus and a computer virus, and could be spread to each type of victim by the other type. It was capable of wiping out humanity in twenty-four hours.
|-
|Idiopathic Adolescent Acute Neurodegeneration (IAAN)
|The Darkest Minds Trilogy by Alexandra Bracken
|Idiopathic Adolescent Acute Neurodegeneration (IAAN), also known as Everhart's disease after its first victim, is a fatal disease affecting children between the ages 8–14. IAAN is known to not have any specific symptoms, with the only real symptom being death without warning. The 2% that survived IAAN were given powers.
|-
|I-Pollen Degenerative Disorder
|Transmetropolitan (DC Comics)
|The hero Spider Jerusalem has I-Pollen Degenerative Disorder, a disease he gained as a result of coming into contact with Information Pollen, pollen used to transmit information. In 98% of the cases, the disease will cause the victim to lose all motor and cognitive skills. It is comparable to Alzheimer's and Parkinson's disease.
|-
|Inferno virus
|Inferno by Dan Brown
|An airborne virus that incubated in water. It was released by the terrorist group the Consortium to kill off half of humanity and reproduce with only a third of ten individuals who were immune. The virus was modeled on the Black Death. Originally, its creator, Bertrand Zobrist, planned to have it as a waterborne virus, but changed it to airborne because it could infect faster. The Inferno virus can infect a human through damp air, and then it renders humans infertile. The plan was for the infected to die off and humanity to be rendered extinct.
|-
|Kellis-Amberlee
|Feed by Mira Grant
|A spontaneous combination of two man-made viruses that exists in a 'reservoir condition' state without ill effects until the host's death, when any host over approximately 40 pounds undergoes virus amplification and becomes a zombie.
|-
|Konebogetvirus
|The Next Big One by Derek Des Anges
|A long-latency manmade virus which since its creation has mutated multiple times. The virus is modelled on lyssavirus, ebola, rabies, and several other real-world viruses. A notable symptom is the alteration of an infected person's behaviour to increase the likelihood of transmission to others, comparable to toxoplasmosis in mice.
|-
|Krytos virus
|Star Wars Expanded Universe
|The Krytos virus was a deadly and highly contagious virus that only attacked non-human species. It could spread via a number of avenues, including by water supplies and by air. The virus often killed its host in less than two weeks, resulting in a painful death.
|-
|Legacy Virus
|Marvel Multiverse
|A disease that targets only mutants, causing genetic and biological degradation and eventual death; shortly before death, the virus' effects will cause a violent, uncontrolled flare-up of the victim's superhuman abilities. One strain of it can also infect humans, as it did to Moira MacTaggert.
|-
| Letumosis
| The Lunar Chronicles series by Marissa Meyer
| Also known as the "Blue Fever", a worldwide pandemic that is compared to the plague. Multiple stages. Carriers are noted to show boils and patches on their skin.
|-
|Life-Eater Virus
|Warhammer 40,000 novels
|The Life-eater virus is a form of necrotizing fasciitis that causes all biological matter to break down into its component parts, releasing toxic, flammable gas that can be ignited with a single explosion. The virus eats itself when there is nothing else to attack. It is quite effective against Tyranids. In the short-story anthology Planetkill, an updated strain goes after the soul, turning the population into zombies, created by a Techpriest inhabited by a daemonic Unclean One.
|-
|Love Sickness
|One Piece
|A mostly psychosomatic disease that can only be contracted by the empress of the Kuja Tribe if she falls in love with a man and denies the feeling. It causes weakness, pain, and eventually death from declining health. The only known cure is for the victim to accept the emotions and pursue the object of her desire. This disease has killed many previous empresses, and is currently a threat to Boa Hancock, who pursues Monkey D. Luffy to avoid the symptoms.
|-
|Leezle Pon
|Green Lantern Corps
|A super-evolved smallpox virus with intelligence and sentience; it is a member of the Green Lantern Corps that played a pivotal role in defeating Despotellis.
|-
|Lycanthropy
| Various
| The general term for the condition that causes a human to transform into a werewolf. Regarded as a curse or the result of evil magic in folklore, it is often regarded as an infectious disease spread by other werewolves in modern werewolf fiction.
|-
|Maternal Death Syndrome (MDS)
|The Testament of Jessie Lamb by Jane Rogers
|Latent in everyone and triggered upon pregnancy, it causes rapid progressive brain degeneration and is invariably fatal to both mother and child. Possibly a strain of JC virus.
|-
|Neurodermatitis
|Dark Benediction (1951) by Walter Miller Jr.
|A pathogen causing rapid nervous system evolution and development of new sensory organs, which causes synesthetic psychosis in unprepared hosts. Sent to Earth by an alien race living in symbiosis with it, in the hopes of furthering other races' advance. Designed for controlled delivery, it is turned into a plague when a curious retriever cuts the vessel open with a hacksaw.
|-
|Pale Mare (also known as the bloody flux)
|A Song of Ice and Fire
|This is a cholera-like disease transmitted through water. It causes diarrhea and intestinal bleeding, which soon lead to death. It is common during wars.
|-
|Plague of Insomnia
| One Hundred Years of Solitude by Gabriel García Márquez
|An epidemic brought into the Buendía household and the town of Macondo by Rebeca; the adopted daughter of José Arcadio Buendía and Úrsula Iguarán. This plague, originally coming from the northern Indian kingdoms in La Guajira (Colombia), is identified by the symptoms of wide-open, glowing eyes like those of a cat, and the impossibility of sleeping. Those infected (in the novel consisting of the entire town of Macondo) feel no tiredness or sleepiness whatsoever and hence can work all day and night. However, as time advances, those infected begin to lose all their memories and knowledge of the world; ultimately leaving them in a state in which they have forgotten the names and uses of all things and their own identities. The plague is generally seen as one of the most prominent demonstrations of magical realism in García Márquez's literary works.
|-
|The Pulse
|Cell by Stephen King
|A powerful virus that lies dormant inside mobile phones and which requires a powerful signal to set off. Who unleashed it is unknown, but numerous theories in the novel imply a terrorist group. The virus is implied to have been released just after September 11th, and to have lain dormant in cell phones ever since. Once the right signal is transmitted and leaked into incoming phone calls, the caller's brain cells immediately disintegrate and they are unable to tell friend from foe; they are even unable to recognize other people infected with the virus. Inevitably, the infected callers become psychotic and start killing each other, the chaos of which lasts approximately two days before the infected callers have become "stable" enough to cooperate and recognize each other.
|-
|Queen’s Lady Plague
|Six of Crows Duology by Leigh Bardugo
|The Queen's Lady Plague refers to an outbreak of firepox in Ketterdam about seven years before the events of Six of Crows. It was named after a ship, the Queen's Lady, which was believed to have brought the disease to the city. When an outbreak occurred, the plague sirens sounded to signal all citizens to return to their homes, and the officers of the stadwatch to report to their designated stations around the city. Only the sickboats, bodymen, and mediks were allowed to move freely about the city during an outbreak.
|-
|Raison Strain
|Books of History Chronicles by Ted Dekker
|Originally starting off as a vaccine created by Monique Raison, it was mutated into a deadly virus that succeeded in killing off most of humanity. In the future, its counterpart was the Horde disease.
|-
|Ratititis
|Roald Dahl's Boy
|A fictional disease invented by Roald Dahl's friend Thwaites during their schooldays in Llandaff. Thwaites made this up to amuse Roald and the other friends, but he says his dad told him about the disease, which is apparently contracted from eating liquorice bootlaces. Thwaites says that the bootlaces actually contain rat's blood rather than liquorice, and that they are made by rat-catchers bringing their rats to the sweet factory, where they pound the rats into a paste and then mash it up to form liquorice bootlaces. Thwaites told Roald and his friends never to eat them, because if they did, a rat's tail would burst out of their buttocks and their teeth would turn into fangs. Only Roald and his friends saw the joke; Thwaites took it with deadpan humour.
|-
|The Red Death
|"The Masque of the Red Death" by Edgar Allan Poe
|Victims bleed from their pores before eventually dying. Most likely a viral hemorrhagic fever.
|-
|The Ripley
|Dreamcatcher by Stephen King
|An alien parasitoid macrovirus. The adult aliens resemble deformed potato beings with legs, while the younger aliens—nicknamed "shit-weasels" because they can be created in a host organism's stomach and escape by eating their host's body between the stomach and anus—are legless, smaller versions of the adult alien. Both adult and young aliens have a mouth consisting of a slit on the underside of the head that goes down the length of the worm. The lips separate to reveal hundreds of teeth that can bite through steel.
|-
|Rock Disease
|Jojo's Bizarre Adventure: Jojolion
|A hereditary disease passed down from generation to generation in the Higashikata family. The disease slowly turns a person into rock, starting at the age of ten. There is no known medical cure for the disease.
|-
|Sakutia
|DC Comics
|Sakutia, also known as Green Fever, is an extremely rare lethal viral disease found primarily in the African region of Lamumba. The virus attaches itself to a victim's DNA, enabling the host body to instinctively rewrite their own genetic code. Typically, a host relies upon primitive instinct when affecting such a change, rendering them capable of shapeshifting into a wide variety of forms (usually animals). Sakutia victims suffer from one other noticeable side effect: their hair and skin turn permanently green in hue.
|-
|Salt Plague
|Spiritwalker Trilogyby Kate Elliott
|Disease that feeds on the salt in its host's body. The host eventually loses their humanity and becomes violently hungry, seeking the salty blood of others. The plague is spread by its victim's bites.
|-
|Scarlet Plague
|The Scarlet Plague by Jack London
|This 1912 novella, also known as the Scarlet Death, is a work of post-apocalyptic fiction treating the world after civilization has been destroyed by this fictional disease.
|-
|Sevai and Vedet
|Always Coming Home by Ursula K. Le Guin
|Genetic diseases of people and animals in the postapocalyptic setting of Always Coming Home, caused by the leftover chemical and radiation pollution. Vedet involves personality disorders and dementia; sevai usually leads to blindness and other sensory loss, along with degeneration of muscle control. Both diseases are painful, crippling, incurable, and fatal. Severity of onset and the length of the course of the illness vary: major damage leads to non-viability in the womb (with a quarter of all children in the Valley being stillborn due to sevai); minor damage might not show up until old age and lead to death in a decade.
|-
|Shame
|The Hitchhiker's Guide to the Galaxy by Douglas Adams
|Mentioned as being "still a terminal disease in some parts of the Galaxy," this disease seems rife amongst the population of Betelgeuse 5, the fifth planet of the sun Betelgeuse. It killed off the father of Ford Prefect when he was so ashamed that Ford could not say his birth name, "Ix", and this embarrassed Ford and resulted in him being mocked during school.
|-
|Shiva
|Rainbow Six by Tom Clancy
| A genetically modified version of Ebola created to help a group of eco-terrorists to annihilate mankind.
|-
|The Sickness/Imperial bioweapons project I71A/Project: Blackwing
|Death Troopers, Red Harvest
|A virus artificially created by the Sith Lord Darth Drear thousands of years ago in order to achieve immortality. Centuries later, Darth Scabrous successfully completed Drear's unfinished work, but accidentally modified it into a semi-sentient hive mind that creates zombies. The Sith academy on Odacin-Fauster was wiped out by the plague. Thousands of years later, Darth Vader commissioned the Empire's bioweapons division to recreate the virus. Upon completion, the virus was loaded onto the Star Destroyer Vector for transport to a testing site. En route, the tanks leaked and the Destroyer's crew was zombified. The virus is characterized by grey goo.
|-
|Snow Crash
|Snow Crash by Neal Stephenson
|A dangerous drug that is both a computer virus capable of infecting the brains of unwary hackers in the Metaverse and a mind-altering virus distributed by a network of Pentecostal churches via its infrastructure and belief system. Both forms cause glossolalia, and the computer virus form appears as a snowy pattern of pixels.
|-
|Solanum Virus
|World War Z/The Zombie Survival Guide by Max Brooks
|A virus that has existed since the beginning of human history, which is highly contagious through bodily fluids such as blood. Solanum symptoms include dementia, paralysis in the extremities, and discoloration of the wound, which increase as the virus replicates itself. The virus is centered on the brain, and destroys the cells of the brain and replaces them with the virus. In doing so, the infected victims are declared clinically dead. The virus takes around sixteen hours to replicate, although it varies from individual to individual. Once Solanum has fully replicated, the victim awakes from the coma, with an unquenchable desire for human flesh. The victim also exhibits typical zombie-like behavior such as psychotic behavior and mindless rage, and can only be killed by destruction of the brain.
|-
|Space plague
|Alisa Selezneva books by Kir Bulychov
|A lethal, extremely contagious virus responsible for destroying numerous inhabited planets. It is difficult to combat because the virus is very good at mimicry and is capable of forming a hive mind that can direct its own mutations. Earth narrowly averted destruction in the mid-21st century thanks to the ship carrying two infected people being quarantined on Pluto.
|-
|Spattergroit
|Harry Potter and the Order of the Phoenix and Harry Potter and the Deathly Hallows
|A disease that covers the victim in purple pustules and renders them unable to speak. It may be a type of fungus, as Ron Weasley says that the effect of being unable to speak occurs "once the fungus has spread to your uvula". The only known cure, according to the portrait of a Healer in St. Mungo's Hospital for Magical Maladies and Injuries, is to bind the liver of a toad around the victim's throat and stand nude in a barrel of eel's eyes under a full moon. The portrait said that he believed Ron had this disease, due to the "unsightly blemishes" on his face. Ron would later use this disease in Harry Potter and the Deathly Hallows as an excuse as to why he was unable to return to Hogwarts, when in actuality he and his friends were out searching for Lord Voldemort's Horcruxes.
|-
| Stand Virus
|JoJo's Bizarre Adventure
|Passed down through families, although it is not hereditary. When a person is infected, their family members will be infected at around the same time. It comes from a meteorite that was later made into several arrows. The symptom of the virus is an intense, untreatable fever. If the person infected has enough willpower and survives, they earn a stand ability that is a manifestation of their soul. Not everybody suffers through the fever before obtaining a stand.
|-
|Stone Sickness
|The Edge Chronicles by Paul Stewart and Chris Riddell
|Not a human disease, but one that affects humans and other inhabitants of the Edge by attacking the rocks of the flight ships that are the primary means of transport and communication on the Edge. As the flight ships are carried aloft by the rocks, this puts an end to business and trade, resulting in a brief societal collapse followed by a gradual rebuilding of society when the Edge's inhabitants become accustomed to life with Stone-sickness. Symptoms of infected flight rocks include a brief scar, followed by an open wound and a gaping hole as the rock dissolves. Eventually the sky ship drops clean out of the sky. Many theories abound on the origin of Stone-sickness. Some people blame the gods. Others blame the Mother Storm, the mysterious meteorological creator of the Edge. Some say that the sky pirate captain Cloud Wolf who perished in the Mother Storm somehow infected her and the Stone-sickness is a result of his pestilence. It is only at the end of the series that it is revealed the Gloamglozer created the disease and it had been incubating inside the Stone Gardens ever since he fled the city of Sanctaphrax almost a century before the sickness.
|-
|Stripes
|A Bad Case of Stripes by David Shannon
|An unnamed disease that causes the affected individual to change color/pattern when names of patterns are used. Cured and/or prevented by being yourself, or not hiding a part of yourself. (The girl in the story loves lima beans, but won't admit it for fear of being "weird".)
|-
|Super-smallpox
|Stormbreaker
|A genetically engineered version of the smallpox virus that Iraq made in the Gulf War. Herod Sayle used the disease in his plans for vengeance. He genetically modified it so it would kill whoever it infected immediately. Fortunately, the plan was stopped by Alex and the virus was taken and quarantined by MI6. It is implied in Snakehead that Sayle acquired the virus (apparently the R-5) from SCORPIA, a SPECTRE-like criminal organisation that sponsored his project.
|-
|"T4 Angel Virus"
|Hollows (series) by Kim Harrison
|The result of genetic engineering, the T4 Angel Virus was spread by infected tomatoes. It wiped out a large percentage of humanity, along with the elves and several other species that had been secretly coexisting. Other species unaffected by the virus, such as witches, vampires, and werewolves, soon equaled humanity's depleted numbers and began living openly. Tomatoes are still feared and shunned by humans throughout the series.
|-
|"Teen Plague"
|Black Hole by Charles Burns
|Also known as the "bug." It is a mutagenic STD that causes grotesque mutations, such as extra body parts, to grow all over the body. Seems to affect only teenagers.
|-
|TS-19
|The Walking Dead
|A (presumably viral) disease of unknown origin. When a human is infected by it, the disease will infect all cells, eventually resulting in the clinical death of the host. The symptoms that occur before the victim's clinical death include fever, headache, fatigue, confusion, hallucinations and paralysis. The disease has a very short incubation period of around 16 hours. After the victim's clinical death, the host will be revitalized and will wake up exhibiting zombie-like behavior. However, the disease will only activate lower brain functions, mostly those controlled by the brain-stem, where feeding and motor functions are controlled. The host becomes a violent mindless cannibal, and can infect other people by biting. The disease leads to society's collapse and results in a world stricken by a zombie apocalypse.
|-
|V-CIDS
|The Immortals
|An AIDS-like virus.
|-
|Vampiris
|I Am Legendby Richard Matheson
|A bacillus (rod-shaped) bacterium that causes photosensitivity, hysterical blindness near mirrors, overdevelopment of canine teeth, and production of a bulletproof adhesive. Victims feed on blood. While in the body, it is anaerobic, and causes the victim to exhibit vampire-like behavior. Outside the body, it sporulates into dust. If an infected person is cut deep enough, the bacteria turns them into powder. Can be treated, but not cured, with a pill containing a fusion inhibitor and dehydrated blood.
|-
|Venus Particle
|Tyrannosaur Canyon
|An extraterrestrial infectious particle found in a lunar rock sample and within a fantastically well-preserved tyrannosaur fossil in the New Mexico desert. It is later revealed that the organism came to Earth via the Chicxulub asteroid that wiped out the dinosaurs. The particle, which was named for its resemblance to the symbol of Venus and femininity, causes rapid mitosis and apparent cellular differentiation in its host.
|-
|Wanderer's Folly
| The Night Parade by Ronald Malfi
|An inexplicable virus with symptoms of delusions, hallucinations, paranoia, and ultimately death, which affects humans and birds and brings the world close to the brink of extinction while allowing insects to overpopulate. The illness is named after the first few cases, where the infected, lost in daydream-like hallucinations, wandered into traffic.
|-
|Wandering sickness
| The Shape of Things to Come by H.G. Wells
|A product of biological warfare, the disease in its final stages causes victims to wander about in a zombielike daze; with civilization reduced to that of the Dark Ages the only effective response is to kill any infected before they can spread the contagion to others. The disease was also portrayed in the 1936 film adaptation Things to Come.
|-
|White Blindness
| Blindness by José Saramago
|A mysterious epidemic of sudden blindness affecting virtually all humanity, leading to society's collapse. So-called because victims see nothing but a white glare. Not to be confused with the White Blindness in Watership Down which is a name the rabbits use for the real illness Myxomatosis that affects rabbits causing blindness and death.
|-
|Wildcard coccus
|A Certain Magical Index by Kazuma Kamachi
|It is a highly virulent killer bacterium. Its method of infection was very complex and it would mix in with other microorganisms and multiply. It could be transmitted via air, blood, mouth, or skin contact. It could grow even more dangerous by combining with Athlete’s foot, Lactobacillus, or other extremely common pathogens.
|-
|White Disease
|The White Disease by Karel Čapek
|An incurable form of leprosy, killing people older than 30.
|-
|White Plague
|The White Plague by Frank Herbert
|A genetically engineered virus that kills only women. Released only on the Irish, English, and Libyans.
|-
|White Sickness
|Burning Bright by Melissa Scott
| White Sickness, a pneumatic histopathy also known as lung-rot or uhanjao (translatable as "drown-yourself" in the language of the story's aliens), is classified as a dangerous condition less because it is fatal, which it is, than because it is contagious until treated. Simple organ transplants inevitably fail due to the mechanisms by which the disease alters the lung tissue, slowly dissolving it into a thick white mucus, so that the patient drowns in body fluids even as the lungs themselves stop working.
|-
|Xenovirus Takis-A
|Wild Cards by George R. R. Martin
|Xenovirus Takis-A, also known as the wild card virus, works by completely altering the victim's DNA. It has been theorized that the process is guided by the victim's own subconscious, influenced by the person's desires or fears. In this way, the virus works as a modern Aladdin's Lamp. The transformation is extremely individual, no two persons are affected in exactly the same way. In 90% of cases, the victim's body cannot assimilate the extreme changes, and the person dies horribly. These cases are called black queens. From the survivors, 9 out of 10 are changed for the worse, becoming monstrous creatures nicknamed jokers. The miraculous 1% of infected are changed for the better and become aces, gifted with superhuman physical or mental capabilities while still remaining human in appearance.
|-
|Xenovirus Takis-B
|Wild Cards by George R. R. Martin
|Xenovirus Takis-B, also known as the trump virus, is an artificial organism created by Dr. Tachyon as a possible cure for the wild card virus. Ideally, the trump virus reverses the genetic changes caused by the wild card virus, transforming a wild carder back into a normal person. The trump virus is only successful in about twenty-four percent of attempts. Forty-seven percent of the time it doesn't work at all, and an appalling twenty-nine percent of the time, it outright kills the patient. In other words, it is more likely to kill than cure. The Jokertown Clinic only uses the trump virus as a last resort, in the most severe cases where the victim has nothing to lose.
|}
In film
In television
In video games
In role playing games
References
Further reading
Disease in Fiction: Its Place in Current Literature, Nestor Tirard, 1886.
Vital Signs: Medical Realism in Nineteenth-Century Fiction, Lawrence Rothfield, 1992.
Les malades imaginés: Diseases in Fiction, René Krémer, Acta Cardiologica, 2003.
No Cure for the Future: Disease and Medicine in Science Fiction and Fantasy, Gary Westfahl & George Slusser, 2002.
Nineteenth-Century Narratives of Contagion, Allan Conrad Christensen, 2005.
The Thackery T. Lambshead Pocket Guide to Eccentric & Discredited Diseases, Jeff VanderMeer & Mark Roberts (ed.).
List of fictional diseases
Diseases
Fictional
4060580 | https://en.wikipedia.org/wiki/Korat%20Royal%20Thai%20Air%20Force%20Base | Korat Royal Thai Air Force Base | Korat Royal Thai Air Force Base is a base of the Royal Thai Air Force (RTAF) in northeast Thailand, approximately 250 km (157 mi) northeast of Bangkok and about 8 km (5 mi) south of the centre of Nakhon Ratchasima Province (also known as "Khorat" or "Korat"), the largest province in Thailand.
During the Vietnam War, from 1962 to 1975, Korat RTAFB was a front-line facility of the United States Air Force (USAF) in Thailand.
During the 1980s and early 1990s, the airfield was jointly operated as a civil airport for Nakhon Ratchasima. This ended with the opening of Nakhon Ratchasima Airport in the early 1990s.
Units
Korat RTAFB is the home of the 1st RTAF Wing, consisting of three squadrons (101, 102, and 103). The airfield has a single runway of more than 9,800 feet with a full-length parallel taxiway.
102 Squadron flies 15 F-16A-15ADF and one F-16B-15ADF Fighting Falcon air defense airplanes acquired from the USAF and delivered to the RTAF in 2003 and 2004. These airplanes were acquired under the code name "Peace Naresuan IV".
103 Squadron flies eight F-16A and four F-16B acquired under the code name "Peace Naresuan I", five F-16A (of six delivered) under the code name "Peace Naresuan XI", and three F-16A and four F-16Bs acquired from the Republic of Singapore Air Force and delivered in late 2004. All F-16s are the block 15 version.
A detachment of one UH-1H Iroquois helicopter from 203 Squadron, Wing 2 is also based at Korat.
Cope Tiger
Korat RTAFB is a major facility for Cope Tiger, an annual multinational exercise conducted in two phases in the Asia-Pacific region.
Cope Tiger involves air forces from the United States, Thailand, and Singapore, as well as U.S. Marine Corps aircraft deployed from Japan. US naval aircraft have also been involved in Cope Tiger. The flying training portion of the exercise promotes closer relations and enables air force units in the region to sharpen air combat skills and practice interoperability with US forces. Pilots fly both air-to-air and air-to-ground combat training missions.
Participating American aircraft have included the A-10 Thunderbolt II, F-15C/D Eagles, F-15E Strike Eagles, F/A-18A/C Hornets, F/A-18E/F Super Hornets, F-16C/D Fighting Falcons, E-3B/C Sentry Airborne Warning and Control Systems (AWACS) aircraft, KC-135 Stratotanker aerial refueling aircraft, C-130H Hercules airlift aircraft and HH-60G Pave Hawk helicopters.
Thai forces fly F-16A/B Fighting Falcons, F-5E Tigers, ground-attack L-39s, and Alpha Jets of 231 Squadron. Singaporean forces fly F-5Es, F-16C/D Fighting Falcons, KC-130B Hercules, E-2C Hawkeyes, CH-47SD Chinooks and AS-532UL Cougars.
More than 1,100 people participate, including approximately 500 US service members and 600 service members from Thailand and Singapore.
In recent years, Cope Tiger has widened to include combat search and rescue (CSAR) assets, and in 2007 Udon Thani RTAFB was used as a base during the exercise for the first time. These assets have included a C-130E Hercules from the 36th Airlift Squadron, 374th Airlift Wing (based at Yokota AB, Japan) in 2006, and a G-222 and a C-130H from the RTAF in 2007.
Since the 1980s United States Marine Corps F/A-18C Hornet fighters have used Korat as a base during Cobra Gold exercises.
History
The origins of Korat Air Base date back to the Japanese occupation of Thailand during World War II. The Japanese Army established facilities on the land later used to build Korat Air Base, and a small airfield was established there for logistics support of the facility and of the Japanese occupation forces in the area. After the end of the war, the facilities were taken over by the Thai government as a military base. Various Japanese facilities, including the airfield control tower, were used by the RTAF until the 1960s.
In 1961, the Kennedy administration feared a communist invasion or insurgency inside Thailand would spread from the Laotian Civil War. Political considerations with regards to the communist threat led the Thai government to allow the United States to covertly use five Thai bases for the air defense of Thailand and to fly reconnaissance flights over Laos under a "gentleman's agreement" with the United States. An advisory force of Army personnel was sent to Thailand and their first reports indicated that significant infrastructure improvement in the country would be needed in order for US forces to land in the Gulf of Siam and move north to the expected invasion areas along the Mekong River between Laos and Thailand.
The United States Army Corps of Engineers were deployed and established a headquarters at the RTAF airfield that later became Korat RTAFB. The first facilities were built on the north side of the runway. They included a hospital, some barracks and some warehouses for equipment that was flown in using the existing runway. Under the agreement, United States forces using Thai air bases were commanded by Thai officers. Thai air police controlled access to the bases, along with USAF Security Police, who assisted them in base defense using sentry dogs, observation towers, and machine gun bunkers. The Geneva Accords of 1962 ended the immediate threat, but both Camp Friendship and Korat RTAFB were developed as part of the buildup of forces in Southeast Asia during the Vietnam War.
The USAF mission at Korat RTAFB began in April 1962, when one officer and 14 airmen were temporarily assigned to the existing base as the joint US Military Advisory Group (JUSMAG). The army was engaged in the construction of Camp Friendship. Once completed, army forces moved into Camp Friendship, turning the facilities north of the Korat RTAFB runway over to the Thai armed forces.
South of the existing runway, construction of a large air base was begun to support a full USAF combat wing. In July 1964, approximately 500 airmen and officers were deployed to begin construction, and essential base facilities were completed by October 1964, although due to its primitive nature, the air force living area was known for several years as "Camp Nasty" in counterpoint to the Army facility at Camp Friendship. The army retained a portion of the aircraft parking ramp for logistical support of Camp Friendship. The APO for Korat RTAFB was APO San Francisco, 96288.
US advisory forces
The first USAF units at Korat were under the command of the US Pacific Air Forces (PACAF). Korat was the location for TACAN station Channel 125 and was referenced by that identifier in voice communications during air missions. The mission of the USAF at Korat was to conduct operations in support of US commitments in Southeast Asia: North Vietnam, South Vietnam, Cambodia, and Laos. During the Vietnam War, pilots from Korat RTAFB primarily flew interdiction, direct air support, armed reconnaissance, and fighter escort missions.
In mid-June 1964 2 HU-16s of the 33d Air Rescue Squadron were deployed to Korat to act as airborne rescue control ships in support of Yankee Team bombing operations over Laos. They would remain at Korat until June 1965 when they were moved to Udorn RTAFB and then to Da Nang Air Base in South Vietnam and replaced at Korat by HC-54s.
In response to the Gulf of Tonkin Incident of early August 1964, the 6441st Tactical Fighter Wing at Yokota Air Base, Japan deployed 8 F-105D Thunderchiefs of the 36th Tactical Fighter Squadron to Korat on 9 August and commenced operations the following day. The 36th TFS remained at Korat until 29 October then returned to Japan. It was replaced by the 469th Tactical Fighter Squadron, also flying F-105Ds, which was deployed from the 388th Tactical Fighter Wing. From 30 October through 31 December 1964, F-105s from the 80th Tactical Fighter Squadron were deployed from the 41st Air Division, Yokota AB, Japan.
On 14 August 2 HH-43Bs were deployed to Korat to provide base search and rescue. In mid-1965 this unit was redesignated Detachment 4 38th Air Rescue Squadron.
In December 1964, the 44th Tactical Fighter Squadron deployed to Korat from Kadena AB, Okinawa. The 44th would rotate pilots and personnel to Korat on a Temporary duty assignment (TDY) basis from 18 December 1964 – 25 February 1965, 21 April–22 June 1965 and 10–29 October 1965.
The 44th TFS returned to Kadena AB, Okinawa and assignment to the 18th TFW, but on 31 December 1966, it became only a paper organization without aircraft. The high loss rate of the F-105s in the two combat wings at Korat and Takhli RTAFB required the squadron to send its aircraft to Thailand as replacement aircraft. The 44th remained a "paper organization" until 23 April 1967, when it returned to Korat, absorbing the personnel, equipment and resources of the 421st TFS.
6234th Tactical Fighter Wing (Provisional)
In April 1965, the 6234th Air Base Squadron was organized at Korat as a permanent unit under the 2d Air Division to support the TDY fighter units and their operations. This squadron was in existence until the end of April when it was discontinued and the 6234th Combat Support Group, the 6234th Support Squadron, and the 6234th Material Squadron were designated and organized as a result of a 3 May 1965 Pacific Air Forces (PACAF) special order.
The 6234th Tactical Fighter Wing (Provisional) was activated in April 1965 as part of the 2d AD with Colonel William D. Ritchie, Jr. as commander. The wing had responsibility for all air force units in Thailand until permanent wings were established at other bases.
Known deployed squadrons to Korat attached to the 6234th TFW were:
67th Tactical Fighter Squadron (F-105D) February–December 1965
12th Tactical Fighter Squadron (F-105D) February–August 1965
357th Tactical Fighter Squadron (F-105D) 12 June-8 November 1965 when it was reassigned to Takhli RTAFB.
469th Tactical Fighter Squadron (F-105D) remained on TDY at Korat until 15 November 1965 when it was permanently assigned to the 6234th.
68th Tactical Fighter Squadron (F-4C Phantom II) 25 July - 6 December 1965. This was part of the first deployment of the Phantom II to Southeast Asia, with two other squadrons (47th and 431st TFS) deploying to Ubon RTAFB. The squadron specialized in NIGHT OWL (night strike and flare) tactics and this was their main mission at Korat.
421st Tactical Fighter Squadron (F-105D) 20 November 1965 on.
Wild Weasel Detachment (former 531st Tactical Fighter Squadron) (F-100F Super Sabre) November 1965 – July 1966.
On 3 April 1965 the 67th TFS launched the first unsuccessful US airstrike against the Thanh Hóa Bridge.
In 1965, the 6234th TFW and its subordinate units operating F-100s, F-105s, and F-4Cs flew 10,797 sorties totalling 26,165 hours. The wing's efforts merited the Presidential Unit Citation in March 1968.
388th Tactical Fighter Wing
After a series of TDY deployments of F-105s to Korat, on 14 March 1966 the 388th Tactical Fighter Wing was activated and on 8 April was organised to replace the provisional PACAF 6234th TFW which was inactivated.
By 1967, Korat RTAFB was home to as many as 34 operating units and about 6,500 USAF airmen. Korat also housed components of the RTAF and a detachment of No. 41 Squadron RNZAF New Zealand Bristol Freighters. The annual cost for base operations and maintenance was about US$12,000,000. The monthly average expenditure for munitions was on the order of US$4,360,000.
F-105 Thunderchief operations
The 388th TFW initially consisted of two F-105 Thunderchief squadrons, the 421st Tactical Fighter Squadron and the 469th Tactical Fighter Squadron. On 15 May 1966 the 44th Tactical Fighter Squadron was permanently attached to the 388th. The 421st and 469th Tactical Fighter Squadrons flew single-seat F-105Ds, while the 44th flew the two-seat F-105F.
Also on 15 May, an F-4C Phantom II squadron, the 34th Tactical Fighter Squadron and an F-105F squadron, the 13th Tactical Fighter Squadron were deployed and permanently attached to the 388th from the 347th TFW, Yokota AB, Japan and Kadena AB, Okinawa.
The 388th TFW lost 48 aircraft in combat during 1967. Seven others were lost due to non-combat reasons. Forty-three pilots and electronic warfare officers (EWO) were listed as killed (KIA) or missing in action (MIA). Fifteen were rescued.
In March 1967 F-105s from the 388th TFW carried out the first attacks on North Vietnam's Thái Nguyên ironworks, destroying its power plant on 16 March. On 11 August 1967 388th TFW F-105s participated in the first attack on the Paul Doumer Bridge in Hanoi which successfully destroyed one span of the bridge.
The high attrition rate of F-105Ds in Southeast Asian operations soon became a problem. The conversion of USAFE units to the F-4D Phantom enabled some of the European-based F-105Ds to be transferred to Southeast Asia, but this was not sufficient to offset the heavy attrition rate. On 23 April 1967, the 421st TFS was re-designated the 44th Tactical Fighter Squadron. In October 1967 the 44th TFS absorbed the mission and makeup of the 13th TFS. The 13th was transferred to Udorn RTAFB to become an F-4D Phantom unit. Its aircraft and personnel were absorbed by the 44th TFS. With these re-organizations, the 44th TFS possessed both D and F model Thunderchiefs. The squadron's primary mission became one of flying escort for the wing's regular strike force to suppress anti-aircraft artillery (AAA) and surface-to-air missile (SAM) fire.
On 22 December 1967 President Lyndon Johnson visited Korat RTAFB, spending the night at the base.
Wild Weasels
The Wild Weasel concept was originally proposed in 1965 as a method of countering the increasing North Vietnamese SAM threat, using volunteer crews. The mission of the Wild Weasels was to eliminate SAM sites in North Vietnam.
In early 1966, standard F-105Ds with no special electronic countermeasures (ECM) equipment accompanied F-100 Wild Weasel I aircraft equipped with basic ECM equipment. In general, the F-100 would identify the SAM site and the F-105Ds would fly the strike. The mission gradually evolved with the addition of new weapons and ECM equipment until the F-4 replaced the F-100 and the F-105D was replaced by the more capable and specialized two-place F-105F and G models.
F-105F/G Wild Weasel SAM Anti-Radar squadrons assigned to the 388th TFW were:
13th Tactical Fighter Squadron, 15 May 1966 (F-105F)
Activated at Korat, aircraft being deployed from the 41st Air Division in Japan
Inactivated October 1967, aircraft assigned to 44th TFS.
Designation reassigned to 8th TFW, Udorn RTAFB and reequipped with F-4Ds.
Detachment 1, 12th Tactical Fighter Squadron
Formed with F-105Fs transferred from the inactivating 333d, 354th and 357th TFS at Takhli RTAFB on 24 September 1970, with aircraft at Korat in TDY status from the 18th TFW, Kadena AB, Okinawa
Re-designated: 6010th Wild Weasel Squadron and PCS to 388th TFW: 1 November 1970
Re-designated: 17th Wild Weasel Squadron: 1 December 1971 – 15 November 1974
F-105G November 1970 – December 1974
Detachment 1, 561st Tactical Fighter Squadron
TDY from George Air Force Base California, F-105G, 2 January – 5 September 1973
The tactics employed on the Iron Hand missions were primarily designed to suppress the SA-2 SAM and gun-laying radar defenses of North Vietnam during the ingress, attack, and egress of the main strike force. In the suppression role, AGM-45 Shrike missiles were employed to destroy, or at least harass, the SA-2 and/or fire control radar which guided the SA-2 missiles.
On 23 April 1967 the 44th TFS's primary mission became one of flying escort to the wing's regular strike force to suppress AAA and SAM fire as a Wild Weasel squadron.
The 12th TFS was equipped with the F-105G and was temporarily reassigned to Takhli in June 1967. The detachment later returned to its main unit at Korat, and the 44th TFS returned to Korat from the 355th TFW in September 1970, when the decision was made to consolidate the Wild Weasel units under the 388th TFW. With their return, the 6010th Wild Weasel Squadron was formed; it was redesignated the 17th Wild Weasel Squadron on 1 December 1971.
In February 1972, the 67th TFS returned on temporary duty to Korat from Kadena AB, this time equipped with the EF-4C aircraft. The EF-4C was the initial Wild Weasel version of the Phantom, a modified F-4C designed in parallel with the F-105G Wild Weasel program. The EF-4Cs suffered from certain deficiencies which limited their combat effectiveness; for example, they were unable to carry the standard anti-radiation missile (ARM). Consequently, the EF-4C was seen only as an interim Wild Weasel aircraft, pending the introduction of a more suitable type. In February 1973, after the end of combat operations in Vietnam, the 67th TFS and its EF-4C Wild Weasels were withdrawn and returned to Kadena.
F-4 Phantom II operations
In mid-1968 it was decided to make the 388th an F-4 wing equipped with the new F-4E; the wing's F-105s would be transferred to Takhli, where all of the F-105s in the fighter-bomber mission would be consolidated. The Wild Weasels would remain at Korat along with the F-4s in their specialized mission.
On 17 November 1968, an F-4E squadron from Eglin AFB, Florida, replaced the single-seat F-105D Thunderchiefs of the 469th TFS. The new Phantom squadron, the first E-models in Thailand, retained the designation 469th TFS.
On 10 May 1969, the 34th Tactical Fighter Squadron was transferred organizationally to the 347th TFW at Yokota AB, Japan, but it remained attached to the 388th TFW at Korat. It was re-equipped with F-4Es on 5 July.
On 15 October 1969, the F-105-equipped 44th Tactical Fighter Squadron was transferred and reassigned to the 355th TFW at Takhli RTAFB.
On 12 June 1972, the 35th Tactical Fighter Squadron flying F-4Ds was deployed from the 3rd TFW, Kunsan Air Base, South Korea, in a "Constant Guard" redeployment to support operations over North Vietnam during Operation Linebacker. They remained until 10 October 1972 when they returned to Korea.
College Eye Task Force
Combat operations from Korat expanded with the arrival of EC-121 Warning Stars of the College Eye Task Force (later designated Det 1, 552d Airborne Early Warning and Control Wing) from Ubon RTAFB and EC-121R Batcats of the 553rd Reconnaissance Wing. The initial College Eye support team personnel arrived at Korat on 20 September 1967. Less than a month later, on 17 October, the first seven EC-121D aircraft redeployed from Ubon, followed two days later by the arrival of the Batcat EC-121Rs.
The EC-121Ds provided airborne radar coverage and surveillance in support of aircraft flying combat operations. Combat reconnaissance missions of the 552d resumed on 25 November 1967. These missions normally required the aircraft to be on station for eight hours. Including transit time to and from station, an average flight lasted about 10 hours, and the force ranged between five and seven aircraft at any one time.
The mission of the 20 EC-121Rs was to detect and interdict the flow of supplies from North Vietnam down the Ho Chi Minh Trail to the People's Army of Vietnam and Viet Cong forces in South Vietnam. Their primary objective was to create an anti-vehicle barrier. If the vehicles could be stopped, then a major quantity of enemy supplies would be halted.
In November 1970, the 553d RW was inactivated. The 554th RS transferred to Nakhon Phanom RTAFB to operate QU-22 Baby Bats, while the 553rd RS remained at Korat with 11 Batcats until December 1971, when it returned to Otis AFB, Massachusetts.
Det. 1 remained at Korat until June 1970, when it left Thailand. It returned in November 1971, now known as Disco, after North Vietnamese MiGs threatened B-52s and other aircraft operating in southern Laos. It remained at Korat, supporting Operation Linebacker, Operation Linebacker II and other USAF operations, until 1 June 1974, when it returned to McClellan AFB, California.
B-66 Destroyer operations
EB-66s were transferred to Takhli RTAFB in late November 1965 and were used as electronic warfare aircraft, joining strike aircraft during their missions over North Vietnam to jam enemy radar installations. They were not Wild Weasel aircraft, since they did not have the means to attack radar installations directly.
In September 1970, the 42nd Tactical Electronic Warfare Squadron, which flew EB-66s, transferred to Korat from Takhli. The EB-66C/E flew radar and communications jamming missions to disrupt enemy defenses and early warning capabilities.
On 2 April 1972, an EB-66C, call sign Bat 21, was shot down over South Vietnam near the Vietnamese Demilitarized Zone during the Easter Offensive. Lt Col. Iceal Hambleton was the only crew member able to eject, setting in motion an 11 1/2-day search and rescue operation.
Airborne command and control mission
On 30 April 1972 the 7th Airborne Command and Control Squadron (ACCS) was assigned to the 388th TFW from Udorn RTAFB and began flying missions in its EC-130E Hercules aircraft, which were equipped with command and control capsules.
The 7th ACCS played an important role in the conduct of air operations. The squadron had a minimum of two aircraft airborne 24 hours a day directing and coordinating the effective employment of tactical air resources throughout Southeast Asia. Its aircraft functioned as a direct extension of ground-based command and control authorities; the primary mission was providing flexibility in the overall control of tactical air resources. In addition, to maintain positive control of air operations, the 7th ACCS provided communications to higher headquarters. The battle staff was divided into four functional areas: command, operations, intelligence, and communications. Normally, it included 12 members working in nine different specialties. Radio call signs for these missions were Moonbeam, Alleycat, Hillsboro and Cricket.
A-7D Corsair II
On 29 September 1972, the 354th Tactical Fighter Wing, based at Myrtle Beach AFB, South Carolina, deployed 72 A-7D Corsair IIs of the 353rd, 354th, 355th and 356th Tactical Fighter Squadrons to Korat for a 179-day TDY. By mid-October, 1,574 airmen from Myrtle Beach had arrived as part of "Constant Guard IV".
In addition to strike missions during Operations Linebacker and Linebacker II, A-7Ds of the 354th assumed the combat search and rescue "Sandy" role from the A-1 Skyraider in November 1972 when the remaining Skyraiders were transferred to the Republic of Vietnam Air Force.
In March 1973 A-7D aircraft were drawn from the deployed 354th TFW squadrons and assigned to the 388th TFW as the 3d Tactical Fighter Squadron. Some TDY personnel from the 354th TFW were assigned to the 388th and placed on permanent party status.
The 354th TFW Forward Echelon at Korat also became a composite wing. Along with the Myrtle Beach personnel, elements of the 355th Tactical Fighter Wing from Davis-Monthan AFB, Arizona, were deployed to support the A-7D aircraft, later being replaced by A-7Ds from the 23d Tactical Fighter Wing at England AFB. These airmen rotated on 179-day assignments (the limit for TDY assignments) to Korat from these continental United States bases until early 1974.
In March 1972 the 39th Aerospace Rescue and Recovery Squadron moved to Korat from Cam Ranh Air Base. The unit was dissolved on 1 April, being temporarily redesignated Detachment 4, 3rd Aerospace Rescue and Recovery Group, before being redesignated as the 56th Aerospace Rescue and Recovery Squadron on 8 July and absorbing the HH-43 detachment at Korat.
1973 operations in Laos and Cambodia
The Paris Peace Accords were signed on 27 January 1973 by the governments of North Vietnam, South Vietnam, and the United States with the intent to establish peace in Vietnam. The accords effectively ended United States military operations in North and South Vietnam. Laos and Cambodia, however, were not signatories to the Paris agreement and remained in states of war.
The US was helping the Royal Lao Government achieve whatever advantage possible before working out a settlement with the Pathet Lao and their allies. The USAF flew 386 combat sorties over Laos during January and 1,449 in February 1973. On 17 April, the USAF flew its last mission over Laos, attacking a handful of targets requested by the Laotian government.
In Cambodia the USAF carried out a massive bombing campaign to prevent the Khmer Rouge from taking over the country.
Congressional pressure in Washington grew against these bombings, and on 30 June 1973, the United States Congress passed Public Laws 93-50 and 93-52, which cut off all funds for combat in Cambodia and all of Indochina effective 15 August 1973. Air strikes by the USAF peaked just before the deadline, as the Khmer National Armed Forces engaged a force of about 10,000 Khmer Rouge encircling Phnom Penh.
At 11:00 15 August 1973, the Congressionally-mandated cutoff went into effect, bringing combat activities over the skies of Cambodia to an end. A-7 and F-4s from Korat flew strike missions sometimes less than 16 km (10 mi) from Phnom Penh that morning before the cutoff. The final day marked the conclusion of an intense 160-day campaign, during which the USAF expended 240,000 tons of bombs. At Korat, two A-7D pilots from the 354th TFW returned from flying the last USAF combat mission over Cambodia.
Consolidation and inactivation
With the end of active combat in Indochina on 15 August 1973, the USAF began drawing down its Thailand-based units and closing its bases.
The 388th TFW entered into an intensive training program to maintain combat readiness and continued to fly electronic surveillance and intelligence missions. The F-4 and A-7 aircraft practiced bombing and intercept missions in western Thailand. A large exercise was held on the first Monday of every month, involving all USAF units in Thailand. Commando Scrimmage covered skills such as dogfighting, aerial refuelling, airborne command posts and forward air controllers. The A-7D aircraft were pitted against the F-4 aircraft in dissimilar air combat exercises. These missions were flown as a deterrent and a signal to North Vietnam that if the Paris Peace Accords were broken, the United States would use its air power to enforce its provisions.
A drawdown of forces in Thailand was announced in mid-1974. With the closure of Takhli RTAFB, the 347th Tactical Fighter Wing and its 428th and 429th Tactical Fighter Squadrons, each equipped with the F-111, moved to Korat on 12 July 1974. Later that month, the 16th Special Operations Squadron, equipped with AC-130 Spectre gunships, was moved to Korat from Ubon RTAFB.
On 15 March 1974, the EB-66s of the 42nd Tactical Electronic Warfare Squadron were sent to AMARC and the squadron was inactivated.
The 354th Tactical Fighter Wing ended its rotating deployments to Korat on 23 May 1974 and returned its A-7D squadrons (353rd and 355th TFS) and aircraft to Myrtle Beach Air Force Base.
The EC-130s and personnel of 7th ACCS were transferred to the 374th Tactical Airlift Wing at Clark Air Base, Philippines on 22 May 1974.
The 552nd AEW&C returned to McClellan AFB California in June 1974, ending the College Eye mission.
On 15 November 1974, the F-105F/Gs of the 17th WWS were withdrawn and transferred to the 562d TFS/35th TFW at George Air Force Base, California.
The wars in Cambodia and Laos, however, continued. With the political changes in the US during 1974 and the resignation of President Nixon, the air power of the United States at its Thailand bases did not respond to the collapse of the Lon Nol government to the Khmer Rouge in Cambodia during April 1975, nor to the takeover of Laos by the Pathet Lao. Ultimately, the North Vietnamese invasion of South Vietnam during March and April 1975 and the collapse of the Republic of Vietnam were likewise not opposed militarily by the US.
The only missions flown were aircraft of the 388th TFW providing air cover and escort during Operation Eagle Pull, the evacuation of Americans from Phnom Penh, Cambodia and Operation Frequent Wind the evacuation of Americans and at-risk Vietnamese from Saigon, South Vietnam.
On 14–15 May 1975, aircraft assigned to Korat provided air cover in what is considered the last battle of the Vietnam War, the recovery of the SS Mayaguez after it was hijacked by the Khmer Rouge.
With the fall of both Cambodia and South Vietnam in April 1975, the political climate between Washington and the government of PM Sanya Dharmasakti had soured. Immediately after the news broke of the use of Thai bases to support the Mayaguez rescue, the Thai Government lodged a formal protest with the US and riots broke out outside the US Embassy in Bangkok. The Thai government wanted the US out of Thailand by the end of the year. The USAF implemented Palace Lightning, a plan to withdraw its aircraft and personnel from Thailand.
On 30 June 1975, the 347th TFW F-111As and the 428th and 429th TFS were inactivated. The F-111s were sent to the 422d Fighter Weapons Squadron at Nellis Air Force Base, Nevada. The 347th became an F-4E wing at Moody Air Force Base, Georgia.
In late 1975, there were only three combat squadrons at Korat, consisting of 24 F-4Ds of the 34th TFS, 24 A-7Ds of the 3rd TFS, and six AC-130H "Spectre" aircraft of the 16th Special Operations Squadron. The 34th TFS shut down and flew its aircraft to Hill AFB, Utah, in December of that year.
The 16th Special Operations Squadron transferred to Hurlburt Field, Florida on 12 December 1975
The 3rd Tactical Fighter Squadron was transferred to Clark AB, Philippines on 15 December
On 23 December 1975, the 388th TFW and its remaining squadron, the 34th TFS, transferred to Hill AFB, Utah.
After the departure of the 388th TFW, the USAF retained a small flight of security police at Korat to provide base security and to deter theft of equipment until the final return of the base to the Thai Government.
The USAF officially turned Korat over to the Thai Government on 26 February 1976.
Other major USAF units assigned
Det. 17, 601st Photo Flight (MAC), (HQ - 600th Photo Squadron)
1974th Communications Squadron and Group (Tenant AFCS)
1998th Communications Squadron (Tenant AFCS)
American Forces Thailand Network (Tenant AFRTS)
Detachment 7, 6922 Security Wing
RTAF use after 1975
After the US withdrawal in 1976, the RTAF consolidated the equipment left by the departing USAF units in accordance with government-to-government agreements, and assumed use of the base at Korat. The American withdrawal had quickly revealed to the Thai Government the inadequacy of its air force in the event of a conventional war in Southeast Asia. Accordingly, in the 1980s the government allotted large amounts of money for the purchase of modern aircraft and spare parts.
Thirty-eight F-5E and F-5F Tiger II fighter-bombers formed the nucleus of the RTAF's defense and tactical firepower. The F-5Es were accompanied by training teams of American civilian and military technicians, who worked with members of the RTAF.
In addition to the F-5E and F-5F fighter-bombers, OV-10C counter-insurgency aircraft, transports, and helicopters were added to the RTAF inventory. In 1985 the United States Congress authorized the sale of the F-16 fighter to Thailand.
By the late 1980s, Korat, Takhli, and Don Muang RTAFB outside Bangkok, which was shared with civil aviation, were the primary operational holdings of the RTAF. The facilities at other bases abandoned by the United States (Ubon, Udorn) proved too costly to maintain and exceeded Thai needs, so they were turned over to the Department of Civil Aviation for civil use. Nakhon Phanom and U-Tapao were placed under the control of the Royal Thai Navy. Nonetheless, all runways on the closed or transferred airfields were still available for military training and emergency use.
Camp Friendship (United States Army)
Adjacent to Korat RTAFB to the south was United States Army Camp Friendship. It was a separate facility which pre-dated Korat RTAFB.
Camp Friendship was the home of Headquarters, United States Army Support, Thailand (USARSUPTHAI), part of the Army Military Assistance Command Thailand (MACTHAI). The facility was initially set up as a forward operating base for equipment storage of the 25th Infantry Division, which would have deployed to Thailand in the event of invasion. The USAF would be able to airlift the division into Korat where they could pick up their equipment and move into battle.
The host unit was the 44th Engineer Group (Construction), part of the 9th Logistics Command. It was a large facility (larger than Korat RTAFB) complete with support offices, barracks for about 4,000 personnel, enlisted, NCO, and officer clubs, a motor pool, a large hospital, athletic fields, and other facilities. It was assigned APO San Francisco 96233.
Its mission was to build roads and a support (logistics) network in support of US Army and USAF operations in Thailand by executing the troop construction portion of the military construction program, performing engineer reconnaissance, and accomplishing civil action projects as resources permitted. The group constructed the Bangkok By-Pass Road, a 95 km asphalt highway between Chachoengsao and Kabin Buri, which was opened in February 1966. For their performance in the construction of this road (now Route 303), the 809th Engineer Battalion (Construction) and the 561st Engineer Company (Construction) were awarded Meritorious Unit Commendations.
As soon as the Bangkok bypass road paving was completed, Company B moved to Sattahip to begin construction of Camp Vayama, a 1,000-man troop cantonment area which would eventually become part of a vast port and logistical complex. Joined by Company C in the latter part of May, construction continued. In August, the main portion of Company C was moved to Sakon Nakhon where it built a troop cantonment area, a special forces camp, and a POL tank farm at Nakhon Phanom (NKP) in support of the air force.
On 3 January 1967, Company C returned to Phanom Sarakam to begin work on the "inland road", a 122-kilometer, all-weather highway which would connect the Port of Sattahip with the Bangkok bypass road. Upon its completion, the inland road became a vital contribution to the economic development of Thailand and served as an important link in the supply and communication lines between the Gulf of Siam and northeast Thailand.
In 1970, the 44th Engineer Group was inactivated in Thailand as part of the draw down of United States forces in Southeast Asia. Camp Friendship closed as a separate facility in 1971 and much of the facility was turned over to the Royal Thai Army. After its closure, the USAF retained some barracks and personnel support facilities. The 388th Tactical Fighter Wing used those parts of Camp Friendship for overflow of personnel assigned or deployed to it until the USAF turned Korat Air Base over to the RTAF in early 1976.
Today, Camp Friendship is a Royal Thai Army artillery base. Some of the old US facilities are still in use, and some new construction has also been erected.
Major organizations assigned to Camp Friendship were:
HHC 9th Logistics
HHC USARSUPTHAI
HQ 809th Engineer Battalion
HQ USARSUPTHAI Liaison
US Embassy Attache Office
USARSUPTHAI
USASTRATCOM SIG Battalion
USASCCCCA
7th Airlift Platoon
7th MAINT Battalion, Direct Support 1965–71
9th Logistical Command HHD Logistics Support 1963–70
9th Logistics Pad 55/56
13th MP Company, Separate 1969–73
21st MED Depot Medical 1967–70
28th Signal Company
31st MED Field Hospital 1962–70
33rd Transportation TC
35th Finance Sec Disb
40th MP Battalion, Military Police Support 1967–70
41st ORD Company, Direct Ammunition Support 3/1966-9/1966
44th Engineer Group, HHC/HHD Construction 1962–70
46th Special Forces (SF)
55th Signal Company
57th MAINT Company, Direct Support 1963–71
57th Ordnance Company DS
70th Aviation Detachment
93rd Psyops Co
128th Medical Battalion
133rd MED Group, HHD Medical Support 1968–70
172nd Transportation Detachment
219th MP Company, Physical Security 1966–71
256th AG Company Personnel 1967–71
258th Transportation Detachment
260th Transportation Company TC
270th Transportation Detachment
270th Ordnance Detachment
281st MP Company
291st Transportation Company TC
313th Transportation Company TC
331st Sup Co (SUP-DEP) 1964–66
331st Supply Depot
379th Signal Battalion
428th MED Battalion, HHD Medical Support 1966–68
442nd Signal Battalion 1967–71
501st Field Depot
513th MP Det
519th Transportation Battalion
528th Engineer Detachment (Utilities)
538th Engineer Battalion, Construction 1965–70
558th Supply Company
561st Engineer Company (Construction)
590th Supply & Service (DS)
590th QM Company (DS) 1964–65
593rd EN Company, Construction 6/1963-8/1963
597th MAINT Company, Direct Support 1966–69
697th EN Company, Pipeline Construction Support 1965–69
720th Military Police Battalion
738th Engineer Support Company, Supply Point 1963–65
809th Engineer Battalion
999th Engineer Battalion
See also
United States Air Force in Thailand
United States Pacific Air Forces
Seventh Air Force
Thirteenth Air Force
References
Bibliography
Endicott, Judy G. Active Air Force wings as of 1 October 1995; USAF active flying, space, and missile squadrons as of 1 October 1995. Maxwell AFB, Alabama: Office of Air Force History, 1999. CD-ROM.
Glasser, Jeffrey D. The Secret Vietnam War: The United States Air Force in Thailand, 1961–1975. McFarland & Company, 1998.
Martin, Patrick. Tail Code: The Complete History of USAF Tactical Aircraft Tail Code Markings. Schiffer Military Aviation History, 1994.
Logan, Don. The 388th Tactical Fighter Wing: At Korat Royal Thai Air Force Base, 1972. Atglen, Pennsylvania: Schiffer Publishing, 1997.
USAAS-USAAC-USAAF-USAF Aircraft Serial Numbers—1908 to present
The Royal Thai Air Force (English Pages)
Royal Thai Air Force – Overview
External links
Official site of 1st Wing, RTAF
Photos Of Camp Friendship – US Army Support Command, Thailand
My 1966–67 photos on base and off base action.
Retaking The Mayagüez – The final battle of the Vietnam War
Official Royal Thai Air Force Website
Hill AFB, Utah. Home of the 388th FW
The Vietnam War Years of Korat Royal Thai Air Base website
Korat Air Base Thailand and Camp Friendship 1965–1970 (Video)
Life on Korat AFB (Video)
Royal Thai Air Force bases
Buildings and structures in Nakhon Ratchasima
Closed facilities of the United States Air Force in Thailand
1955 establishments in Thailand |
23473461 | https://en.wikipedia.org/wiki/Robert%20Kenner | Robert Kenner | Robert Kenner is an American film and television director, producer, and writer. Kenner is best known for directing the film Food, Inc. as well as the films, Command and Control, Merchants of Doubt, and When Strangers Click.
Kenner's most recent project is 2019's five-part documentary series The Confession Killer, which examines notorious serial killer Henry Lee Lucas and what may be the greatest hoax in American criminal justice history.
In 2016, Kenner released Command and Control, a documentary of a 1980s nuclear missile accident in Arkansas, based on Eric Schlosser's award-winning book of the same name. The Village Voice wrote, “Command and Control is frightening for a whole pants-shitting list of reasons…morbidly fun to watch, in the manner of good suspense thrillers and disaster films.”
In 2014, he released Merchants of Doubt, inspired by Naomi Oreskes' and Erik Conway's book of the same name. The film explores how a handful of skeptics have obscured the truth on issues from tobacco smoke, to toxic chemicals, to global warming. The Nation described Merchants of Doubt as "like a social-issues documentary by Samuel Beckett. You laugh as you contemplate everyone's doom".
In 2011, Kenner released When Strangers Click for HBO. The film was nominated for an Emmy Award. The New York Times wrote, “Reserving judgment, the film beautifully explores the poignant nature of [one couple's] ambivalence toward solitude.”
In 2008, he produced and directed the Oscar-nominated, Emmy-winning documentary film Food, Inc., which examines the industrialization of the American food system and its impacts on workers, consumers, and the environment. Variety wrote that Food, Inc. “does for the supermarket what Jaws did for the beach.”
In 2003, Kenner worked as co-filmmaker with Richard Pearce on The Road to Memphis for Martin Scorsese’s series, The Blues. Newsweek called the film, “the unadulterated gem of the Scorsese series.”
Kenner has directed and produced numerous films for the award-winning PBS documentary series, American Experience including Two Days In October, which received a Peabody Award, an Emmy, and a Grierson award.
He has directed and produced several films for National Geographic including America’s Endangered Species: Don't Say Goodbye, which received the Strand Award for Best Documentary from the International Documentary Association.
Kenner has also directed a number of award-winning commercials and corporate videos for eBay, Hewlett Packard, Hallmark Cards, and others.
References
External links
Robert Kenner Films Web site
Living people
American Experience
American documentary filmmakers
American male screenwriters
American television directors
American television writers
Businesspeople from New Rochelle, New York
Film directors from New York (state)
Film producers from New York (state)
American male television writers
Screenwriters from New York (state)
Television producers from New York (state)
Writers from New Rochelle, New York
Year of birth missing (living people) |
3920710 | https://en.wikipedia.org/wiki/History%20of%20Microsoft%20Flight%20Simulator | History of Microsoft Flight Simulator | Microsoft Flight Simulator began as a set of articles on computer graphics, written by Bruce Artwick throughout 1976, about flight simulation using 3-D graphics. When the editor of the magazine told Artwick that subscribers were interested in purchasing such a program, Artwick founded Sublogic Corporation to commercialize his ideas. At first the new company sold flight simulators through mail order, but that changed in January 1979 with the release of Flight Simulator (FS) for the Apple II. They soon followed this up with versions for other systems and from there it evolved into a long-running series of computer flight simulators.
Sublogic flight simulators
First generation (Apple II and TRS-80)
− January 1979 for Apple II
− January 1980 for TRS-80
Second generation (Tandy Color Computer 3, Apple II, Commodore 64, and Atari 8-bit)
− December 1983 for Apple II
− June 1984 for Commodore 64
− October 1984 for Atari 8-bit family
− Sometime in 1987 for CoCo 3
Third generation (Amiga, Atari ST, and Macintosh)
− March 1986 for Apple Macintosh
− November 1986 for Amiga and Atari ST
In 1984 Amiga Corporation asked Artwick to port Flight Simulator to its forthcoming computer, but Commodore's purchase of Amiga temporarily ended the relationship. Sublogic instead finished a Macintosh version, released by Microsoft, then resumed work on the Amiga and Atari ST versions.
Although still called Flight Simulator II, the Amiga and Atari ST versions compare favorably with Microsoft Flight Simulator 3.0. Notable features included a windowing system allowing multiple simultaneous 3D views - including exterior views of the aircraft itself - and (on the Amiga and Atari ST) modem play.
Info gave the Amiga version five out of five, describing it as the "finest incarnation". Praising the "superb" graphics, the magazine advised to "BEGIN your game collection with this one!"
Microsoft Flight Simulator
Flight Simulator 1.0
− Released in November 1982
Flight Simulator 2.0
− Released in 1984
In 1984, Microsoft released their version 2 for IBM PCs. This version made small improvements to the original version, including the graphics and a more precise simulation in general. It added joystick and mouse input, as well as support for RGB monitors (4-color CGA graphics), the IBM PCjr, and (in later versions) Hercules graphics, and LCD displays for laptops. The new simulator expanded the scenery coverage to include a model of the entire United States, although the airports were limited to the same areas as in Flight Simulator 1.
Over the next year or two, compatibility with Sublogic Scenery Disks was provided, gradually covering the whole U.S. (including Hawaii), Japan, and part of Europe.
Flight Simulator 3.0
− Released in mid-1988
Microsoft Flight Simulator 3 improved the flight experience by adding additional aircraft and airports to the simulated area found in Flight Simulator 2, as well as improved high-res (EGA) graphics, and other features lifted from the Amiga/ST versions.
The three simulated aircraft were the Gates Learjet 25, Cessna Skylane, and Sopwith Camel. Flight Simulator 3 also allowed the user to customize the display; multiple windows, each displaying one of several views, could be positioned and sized on the screen. The supported views included the instrument and control panel, a map view, and various external camera angles.
This version included a program to convert the old series of Sublogic Scenery Disks into scenery files (known as SCN files), which could then be copied to the FS3 directory, allowing the user to expand the FS world.
Flight Simulator 4.0
− Released in late 1989
Version 4 followed in 1989, and brought several improvements over Flight Simulator 3. These included improved aircraft models, random weather patterns, a new sailplane, and dynamic scenery (non-interactive air and ground traffic on and near airports moving along static prerecorded paths). The basic version of FS4 was available for Macintosh computers in 1991. Like FS3, this version included an upgraded converter for the old Sublogic Scenery Disks into SCN files.
A large series of add-on products were produced for FS4 between 1989 and 1993. First from Microsoft & the Bruce Artwick Organization (BAO) came the Aircraft and Scenery Designer (ASD) integration module. This allowed FS4 users to build custom scenery units known as SC1 files which could be used within FS4 and traded with other users. Also, with the provided Aircraft Designer Module, the user could select one of two basic type aircraft frames (prop or jet) and customize flight envelope details and visual aspects. ASD provided additional aircraft including a Boeing 747 with a custom dash/cockpit (which required running in 640 × 350 resolution).
Mallard Software and BAO released the Sound, Graphics, and Aircraft Upgrade (SGA), which added digital and synth sound capability (on compatible hardware) to FS4. A variety of high resolution modes also became available for specific types of higher end video cards and chipsets, thus supplying running resolutions up to 800 × 600. As with ASD, the SGA upgrade also came with some additional aircraft designed by BAO, including an Ultra-light.
Another addition was known as the Aircraft Adventure Factory (AAF), which had two components. The first, the Aircraft Factory, was a Windows-based program allowing custom design aircraft shapes to be used within FS4 utilizing a CAD-type interface, supported by various sub menu and listing options. Once the shape was created and colors assigned to the various pieces, it could be tied to an existing saved flight model as was designed in the Aircraft Designer module. The other component of AAF was the Adventure module. Using a simple language, a user could design and compile a script that could access such things as aircraft position, airspeed, altitude, and aircraft flight characteristics.
Other add-on products (most published by Mallard Software) included: The Scenery Enhancement Edition (SEE4), which further enhanced SC1 files and allowed for AF objects to be used as static objects within SEE4; Pilots Power Tools (PPT), which greatly eased the management of the many aircraft and scenery files available; and finally, a variety of new primary scenery areas created by MicroScene, including Hawaii (MS-1), Tahiti (MS-2), Grand Canyon (MS-3), and Japan (MS-4). Scenery files produced by Sublogic could also be used with FS4, including Sublogic's final USA East and West scenery collections.
Flight Simulator 5.0
− Released in late 1993
Flight Simulator 5.0 is the first version of the series to use textures. This allowed FS5 to achieve a much higher degree of realism than the previous flat-shaded simulators. This also made all add-on scenery and aircraft for the previous versions obsolete, as they would look out of place.
The bundled scenery was expanded (now including parts of Europe). Improvements were made to the included aircraft models, the weather system's realism, and artificial intelligence. The coordinate system introduced in Flight Simulator 1 was revamped, and the scenery format was migrated from the old SCN/SC1 to the new and more complex BGL format.
More noticeable improvements included the use of digital audio for sound effects, custom cockpits for each aircraft (previous versions had one cockpit that was slightly modified to fit various aircraft), and better graphics.
It took about a year for add-on developers to get to grips with the new engine, but when they did they were not only able to release scenery, but also tools like Flightshop that made it feasible for users to design new objects.
Flight Simulator 5.1
− Released in 1995
In June 1995, Flight Simulator 5.1 was introduced, adding the ability to handle scenery libraries including wide use of satellite imagery, faster performance, and a barrage of weather effects: storms, 3D clouds, and fog became true-to-life elements in the Flight Simulator world. This edition was also the first version released on CD-ROM and the last for DOS.
In the fall of 1995, with the release of the Flight Shop program, nearly any aircraft could be built. The French program "Airport" was also available for free, allowing users to build airports (FS5.1 itself included only 250 worldwide), and other designers created custom aircraft cockpit panels. All of this resulted in a large amount of freeware being released for download and added to the FS5.1 simulator. Forums such as CompuServe, Avsim, and Flightsim.com acted as libraries for uploads and discussion.
In November 1995, Microsoft acquired the Bruce Artwick Organization (BAO), Ltd from Bruce Artwick. Employees were moved to Redmond, WA, and development of Microsoft Flight Simulator continued.
Flight Simulator for Windows 95
− Released in mid-1996
With the release of Windows 95, a new version (6.0) was developed for that platform. Although this was essentially just a port from the DOS version (FS5.1), it did feature a vastly improved frame-rate, better haze, and additional aircraft, including the Extra 300 aerobatic aircraft.
Instead of using the version number in the title, Microsoft instead called it "Flight Simulator for Windows 95" to advertise the change in operating system. It is often abbreviated as "FS95" or "FSW95".
This was the first version released after the purchase of BAO by Microsoft, and after having physically relocated development of the BAO development staff to Microsoft's primary campus in Redmond, Washington. The BAO team was integrated with other non-BAO Microsoft staff, such as project management, testing, and artwork.
Additional scenery included major airports outside Europe and the US for the first time.
Flight Simulator 98
− Released on September 16, 1997
Flight Simulator 98 (version 6.1), abbreviated as FS98, is generally regarded as a "service release", offering minor improvements, with a few notable exceptions: The simulator now also featured a helicopter (the Bell 206B III JetRanger), as well as a generally improved interface for adding additional aircraft, sceneries, and sounds.
Other new "out of the box" aircraft included a revised Cessna 182 with a photorealistic instrument panel and updated flight model. The primary rationale for updating the 182 was Cessna's return to manufacturing that model in the late 1990s. The Learjet Model 45 business jet was also included, replacing the aging Learjet 35 from earlier versions. The Dynamic Scenery models were also vastly improved. One of the most noticeable improvements in this version was the ability to have independent panels and sounds for every aircraft.
A major expansion of the in-box scenery was also included in this release, including approximately 45 detailed cities (many located outside the United States, some of which had been included in separate scenery enhancement packs), as well as an increase in the modeled airports to over 3000 worldwide, compared with the approximately 300 in earlier versions. This major increase in scenery production was attributable partially to inclusion of the content from previous standalone scenery packs, as well as new contributions by MicroScene, a company in San Ramon, California who had developed several scenery expansions released by Microsoft.
This release also included support for the Microsoft Sidewinder Pro Force Feedback joystick, which allowed the player to receive some sensory input from simulated trim forces on the aircraft controls.
This was the first version to take advantage of 3D-graphic cards, through Microsoft's DirectX technology. With such combination of hardware and software, FS98 not only achieved better performance, but also implemented better haze/visibility effects, "virtual cockpit" views, texture filtering, and sunrise/sunset effects.
By November 1997, Flight Simulator 98 had shipped one million units, following its September launch. It received a "Gold" award from the Verband der Unterhaltungssoftware Deutschland (VUD) in August 1998, for sales of at least 100,000 units across Germany, Austria, and Switzerland. The VUD raised it to "Platinum" status, indicating 200,000 sales, by November.
Flight Simulator 2000
− Released in late 1999
Flight Simulator 2000 (version 7.0), abbreviated as FS2000, was released as a major improvement over the previous versions, and was also offered in two versions: one version for "normal" users, and one "pro" version with additional aircraft. Although many users had high expectations when this version arrived, many were disappointed when they found out that the simulator demanded high-end hardware; the minimum requirement was only a Pentium 166 MHz computer, although a 400–500 MHz computer was deemed necessary to maintain an even framerate. However, even on a high-end system, stuttering framerate was a problem, especially when performing sharp turns in graphically dense areas. Also, the visual damage effects introduced in FS5 were disabled, and continued to be unavailable in versions after FS2000. While the visual damage effects were still in the game, Microsoft disabled them through the game's configuration files. Users can re-enable the damage effects through modifications. FS2000 also introduced computer-controlled aircraft in some airports.
This version also introduced 3D elevation, making it possible to adjust the elevation for the scenery grids, thus making most of the previous scenery obsolete (as it didn't support this feature). A GPS was also added, enabling an even more realistic operation of the simulator. FS2000 also upgraded its dynamic scenery, with more detailed models and AI that allowed aircraft to yield to other aircraft to avoid incursions while taxiing.
FS2000 included an improved weather system, which featured precipitation for the first time in the form of either snow or rain, as well as other new features such as the ability to download real-world weather.
New aircraft in FS2000 included the supersonic Aerospatiale-BAC Concorde (prominently featured on both editions' box covers) and the Boeing 777 which had recently entered service at the time.
An often overlooked, but highly significant milestone in Flight Simulator 2000, was the addition of over 17,000 new airports, for a total exceeding 20,000 worldwide, as well as worldwide navigational aid coverage. This greatly expanded the utility of the product in simulating long international flights as well as instrument-based flight relying on radio navigation aids. Some of these airports, along with additional objects such as radio towers and other "hazard" structures, were built from publicly available U.S. government databases. Others, particularly the larger commercial airports with detailed apron and taxiway structures, were built from detailed information in Jeppesen's proprietary database, one of the primary commercial suppliers of worldwide aviation navigation data.
In combination, these new data sources in Flight Simulator allowed the franchise to claim the inclusion of virtually every documented airport and navigational aid in the world, as well as allowing implementation of the new GPS feature. As was the case with FS98, scenery development using these new data sources in FS2000 was outsourced to MicroScene in San Ramon, working with the core development team at Microsoft.
Microsoft Flight Simulator 2000 was the last of the Flight Simulator series to support the Windows 95 and Windows NT 4.0 operating systems.
Flight Simulator 2002
− Released in October 2001
Flight Simulator 2002 (version 8.0), abbreviated as FS2002, improved vastly over previous versions. In addition to improved graphics, FS2002 introduced air traffic control (ATC) and artificial intelligence (AI) aircraft enabling users to fly alongside computer-controlled aircraft and communicate with airports. An option for a target framerate was added, enabling a cap on the framerate to reduce stutter while performing texture loading and other maintenance tasks. In addition, the 3D Virtual Cockpit feature from FS98 was re-added in a vastly improved form, creating in effect a view of the cockpit from the viewpoint of a real pilot. The external view also featured an inertia effect, inducing an illusion of movement in a realistic physical environment. The simulation runs smoother than Flight Simulator 2000, even on comparable hardware. A free copy of Fighter Ace 2 was also included with the software.
Flight Simulator 2004: A Century of Flight
− Released in July 2003
Flight Simulator 2004: A Century of Flight (version 9.0), also known as FS9 or FS2004, was shipped with several historical aircraft such as the Wright Flyer, Ford Tri-Motor, and the Douglas DC-3 to commemorate the 100th anniversary of the Wright Brothers' first flight. The program included an improved weather engine that provided true three-dimensional clouds and true localized weather conditions for the first time. The engine also allowed users to download weather information from actual weather stations, allowing the simulator to synchronize the weather with the real world. Other enhancements from the previous version included better ATC communications, GPS equipment, interactive virtual cockpits, and more variety in autogen such as barns, street lights, silos, etc.
Flight Simulator 2004 is also the last version to include and feature Meigs Field as its default airport. The airport was closed on March 30, 2003, and was removed in subsequent releases. It is also the last version to support the Windows 98/9x series of operating systems.
Flight Simulator X
− Released in October 2006
Flight Simulator X (version 10.0), abbreviated as FSX, is the tenth edition in the Flight Simulator franchise. It features new aircraft, improved multiplayer support, including the ability for two players to fly a single plane, and players to occupy a control tower available in the Deluxe Edition, and improved scenery with higher resolution ground textures.
FSX includes fewer aircraft than FS2004, but incorporates new aircraft such as the Airbus A321, Maule Orion, Boeing 737-800 (replacing the aging Boeing 737-400), Beechcraft King Air and Bombardier CRJ700. The expansion pack, named Acceleration, was released later, which includes new missions, aircraft, and other updates. The Deluxe edition of Flight Simulator X includes the Software Development Kit (SDK), which contains an object placer, allowing the game's autogen and full scenery library to be used in missions or add-on scenery. Finally, the ability to operate the control surfaces of aircraft with the mouse was reintroduced after it was removed in FS2002.
Previous versions did not allow great circle navigation at latitudes higher than 60 degrees (north or south), and at around 75-80 degrees north–south it became impossible to "fly" closer to the poles, whichever compass heading was followed. This problem is solved in FSX. Users may now navigate through any great circle as well as "fly" across both the Arctic and Antarctic. This version also adds the option to have a transparent panel.
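To illustrate what flying a great circle involves, as opposed to holding a fixed compass heading, the following is a minimal sketch of the standard initial-bearing and haversine central-angle formulas on a spherical Earth. It is general spherical trigonometry written in Python for illustration, not code taken from the simulator.

    import math

    def great_circle(lat1, lon1, lat2, lon2):
        """Initial bearing and central angle (both in degrees) between two
        points on a sphere, given latitudes and longitudes in degrees."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        # The initial bearing changes continuously along the route; near the
        # poles it changes rapidly, which is why simply holding a compass
        # heading cannot follow a great circle there.
        bearing = math.degrees(math.atan2(
            math.sin(dlon) * math.cos(p2),
            math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)))
        # Central angle via the haversine formula.
        h = (math.sin((p2 - p1) / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
        return bearing % 360, math.degrees(2 * math.asin(math.sqrt(h)))

    # A rough transpolar example (Anchorage to Oslo): the great-circle route
    # starts out heading only about 11 degrees east of due north, even though
    # the destination lies roughly 160 degrees of longitude to the east.
    print(great_circle(61.2, -149.9, 59.9, 10.8))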
FSX is the first of the series to be released exclusively on DVD-ROM due to space constraints. This is also the first in the series that calls for the preparing process known as activating. Through the internet or a phone a hardware number is generated, and a corresponding code is then used to lock the DVD to one single computer only. It also requires a significantly more powerful computer to run smoothly, even on low graphical settings. Users have reported that the game is "CPU-bound" - a powerful processor is generally more helpful in increasing performance than a powerful graphics card.
Meigs Field in Chicago was removed following its sudden destruction in 2003, while Kai Tak Airport in Hong Kong, which had closed in 1998, remained.
FSX is the last version of Microsoft Flight Simulator to support Windows XP, Vista, 7, 8, and 8.1 as Microsoft Flight Simulator (2020) only works on Windows 10.
Flight Simulator X: Acceleration
− Released in October 2007
Microsoft released their first expansion pack for Flight Simulator in years, called Flight Simulator X: Acceleration, to the US market on October 23, 2007, and released to the Australian market on November 1, 2007. Unlike the base game, which is rated E, Acceleration is rated E10+ in the US.
Acceleration introduces new features, including multiplayer air racing, new missions, and three all-new aircraft, the F/A-18A Hornet, EH-101 helicopter, and P-51D Mustang. In many product reviews, users complained of multiple bugs in the initial release of the pack. One of the bugs, that occurs only in the Standard Edition, is the Maule Air Orion aircraft used in the mission has missing gauges and other problems, as it is a Deluxe Version-only aircraft.
The new scenery enhancements cover Berlin, Istanbul, Cape Canaveral, and Edwards Air Force Base, providing high accuracy both in the underlying photo texture (60 cm/pixel) and in the detail given to the 3D objects.
Flight Simulator X: Acceleration can take advantage of Windows Vista, Windows 7, and DirectX 10 as well.
The expansion pack includes code from both service packs, thus installing them is unnecessary.
Flight Simulator X: Steam Edition (Dovetail Games)
− Released in December 2014
On 9 July 2014, Dovetail Games announced a licensing agreement with Microsoft to distribute Microsoft Flight Simulator X: Steam Edition and to develop further products based on Microsoft's technology for the entertainment market.
Dovetail released Microsoft Flight Simulator X: Steam Edition on 18 December 2014. It is a re-release of Flight Simulator X: Gold Edition, which includes the Deluxe and Acceleration packs and both Service Packs. It includes "all standard Steam functionality", and replaces the GameSpy multiplayer system with Steam's multiplayer system.
While FSX: Steam Edition remains on sale, Dovetail also released a new flight simulation franchise, Flight Sim World. The company originally planned to bring this game to market in 2015. However, the program became available in 2017. In April 2018, Flight Sim World development was closed, and sales ended in May 2018.
Flight Simulator (2020)
− Released in August 2020
The latest entry to the series was first revealed in June 2019, at Microsoft's E3 2019 conference. Soon after the announcement, Microsoft Studios made available to the public its Microsoft Flight Simulator Insider Program webpage, where participants could subscribe to news, offer feedback, access a private forum, and be eligible to participate in Alpha and Beta releases of the game.
Flight Simulator (2020) features significantly more scenery detail, accurately modelling virtually every part of the world. The simulation also includes vastly more sophisticated aircraft, with nearly complete simulations of aircraft systems, overhead panels and flight management computers (FMCs) in commercial jet airliners; features which were highly incomplete in previous versions.
The new Flight Simulator is powered by satellite data and Azure AI. It features high fidelity shadow generation and reflections on aircraft surfaces, busy airports with animated vehicles and people, complex cloud formations, defined shorelines and water bodies, realistic precipitation effects on the aircraft's windshield, and very detailed terrain generation with a vast amount of autogenerated scenery.
The official website for the game states: "Microsoft Flight Simulator is the next generation of one of the most beloved simulation franchises. From light planes to wide-body jets, fly highly detailed and stunning aircraft in an incredibly realistic world. Create your flight plan and fly anywhere on the planet. Enjoy flying day or night and face realistic, challenging weather conditions."
The game was released for Windows on August 18, 2020, through Xbox Game Pass and Steam on PC. Microsoft Flight Simulator 2020 was released on Xbox Series S/X on July 27, 2021.
Microsoft Flight
− Released in February 2012
On August 17, 2010, Microsoft announced a new flight simulator, Microsoft Flight, designed to replace the Microsoft Flight Simulator series. New to Flight is Games for Windows – Live integration, replacing the GameSpy client which was used in previous installments.
An add-on market place was implemented as well, offering some additional scenery packs and aircraft as downloadable content (DLC). The new version was aimed at current flight simulator fans, as well as novice players. However, Flight has a different internal architecture and operational philosophy, and is not compatible with the previous Flight Simulator series.
Some users and critics such as Flying Magazine were disappointed with the product, the main issue being that the product is a game, rather than a simulator, to attract a casual audience rather than enthusiasts who would want a more realistic experience.
On July 25, 2012, Microsoft announced it had cancelled further development of Microsoft Flight, stating that this was part of "the natural ebb and flow" of application management. The company stated it will continue to support the community and offer Flight as a free download, but closed down all further development of the product on 26 July 2012.
Products based on the Flight Simulator X codebase
Lockheed Martin Prepar3D
In 2009, Lockheed Martin announced that they had negotiated with Microsoft to purchase the intellectual property, including the source code, for Microsoft ESP, which was the commercial-use version of Flight Simulator X SP2. In 2010 Lockheed announced that the new product based upon the ESP source code would be called Prepar3D. Lockheed has hired members of the original ACES Studios team to continue development of the product. Most Flight Simulator X add-ons as well as the default FSX aircraft work in Prepar3D without any adjustment since Prepar3D is kept backward compatible. The first version was released on 1 November 2010.
Dovetail Games Flight Sim World
In May 2017, Dovetail Games announced Flight Sim World, based on the codebase of Flight Simulator X, and released later that month. Only a year later, on April 23, 2018, Dovetail announced end of development of Flight Sim World and the end of sales effective May 15, 2018.
Reception
In 1989, Video Games & Computer Entertainment reported that Flight Simulator was "unquestionably the most popular computer game in the world, with nearly two million copies sold."
References
External links
"Flight Simulator History" - Detailed history of early versions of Flight Simulator
"Microsoft Flight Simulator History Web" - Evolution of Microsoft Flight Simulator
"FS4 Webport" - Extensive information and support for Microsoft Flight Simulator 4
"Lockheed Martin Prepar3D" - Prepar3D home page
Microsoft games
Flight Simulator
Microsoft Flight Simulator
Video games developed in the United States
Microsoft Flight Simulator |
8031439 | https://en.wikipedia.org/wiki/Radeon%20HD%204000%20series | Radeon HD 4000 series | The Radeon R700 is the engineering codename for a graphics processing unit series developed by Advanced Micro Devices under the ATI brand name. The foundation chip, codenamed RV770, was announced and demonstrated on June 16, 2008 as part of the FireStream 9250 and Cinema 2.0 initiative launch media event, with official release of the Radeon HD 4800 series on June 25, 2008. Other variants include enthusiast-oriented RV790, mainstream product RV730, RV740 and entry-level RV710.
Its direct competition was Nvidia's GeForce 200 series, which launched in the same month.
Architecture
This article is about all products under the brand "Radeon HD 4000 Series". All products implement TeraScale 1 microarchitecture.
Execution units
The RV770 extends the R600's unified shader architecture by increasing the stream processing unit count to 800 units (up from 320 units in the R600), which are grouped into 10 SIMD cores composed of 16 shader cores containing 4 FP MADD/DP ALUs and 1 MADD/transcendental ALU. The RV770 retains the R600's 4 Quad ROP cluster count, however, they are faster and now have dedicated hardware-based AA resolve in addition to the shader-based resolve of the R600 architecture. The RV770 also has 10 texture units, each of which can handle 4 addresses, 16 FP32 samples, and 4 FP32 filtering functions per clock cycle.
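The stream processor count and the chip's peak arithmetic rate follow directly from this organization. The short sketch below reproduces the numbers; the 750 MHz core clock is not stated above and is assumed here only as the commonly cited reference clock of the Radeon HD 4870.

    # Stream processor count of the RV770 as described above:
    # 10 SIMD cores x 16 shader cores x (4 MADD/DP ALUs + 1 MADD/transcendental ALU)
    simd_cores = 10
    shader_cores_per_simd = 16
    alus_per_shader_core = 4 + 1
    stream_processors = simd_cores * shader_cores_per_simd * alus_per_shader_core
    print(stream_processors)  # 800

    # Peak single-precision rate, assuming a 750 MHz core clock (HD 4870) and
    # one multiply-add (two floating-point operations) per ALU per cycle.
    core_clock_ghz = 0.75               # assumption, not taken from the text above
    flops_per_alu_per_cycle = 2
    peak_gflops = stream_processors * flops_per_alu_per_cycle * core_clock_ghz
    print(peak_gflops)                  # 1200.0 GFLOPS, i.e. about 1.2 TFLOPS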
Memory and internal buses
RV770 features a 256-bit memory controller and is the first GPU to support GDDR5 memory, which runs at 900 MHz, giving an effective transfer rate of 3.6 GHz and memory bandwidth of up to 115 GB/s. The internal ring bus from the R520 and R600 has been replaced by the combination of a crossbar and an internal hub.
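The quoted bandwidth figure can be reproduced from the memory clock and bus width given above; a minimal worked example:

    # GDDR5 on the RV770: 900 MHz memory clock, data transferred at four times
    # the nominal clock (quad data rate), over a 256-bit memory interface.
    memory_clock_mhz = 900
    transfers_per_clock = 4
    bus_width_bits = 256

    effective_rate_gtps = memory_clock_mhz * transfers_per_clock / 1000
    bandwidth_gb_per_s = effective_rate_gtps * bus_width_bits / 8
    print(effective_rate_gtps)   # 3.6 GT/s ("effective 3.6 GHz")
    print(bandwidth_gb_per_s)    # 115.2 GB/s, matching the "up to 115 GB/s" figure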
Video acceleration
The UVD 2.0–2.2 SIP block is implemented on the dies of all Radeon HD 4000 series desktop GPUs: the 48xx series uses UVD 2.0, while the 47xx, 46xx, 45xx and 43xx series use UVD 2.2.
Support is available for Microsoft Windows at release, and for Linux with Catalyst 8.10. The free and open-source driver requires Linux kernel 3.10 in combination with Mesa 9.1 (exposed via the widely adopted VDPAU), offering full hardware MPEG-2, H.264/MPEG-4 AVC and VC-1 decoding and support for dual video streams. The Advanced Video Processor (AVP) also saw an upgrade, with DVD upscaling capability and a dynamic contrast feature. The RV770 series GPU also supports xvYCC color space output and 7.1 surround sound output (LPCM, AC3, DTS) over HDMI. The RV770 GPU also supports an Accelerated Video Transcoding (AVT) feature, in which video transcoding functions are assisted by the GPU through stream processing.
GPU interconnect enhancements
This generation of dual-GPU design retains the use of a PCI Express bridge, the PLX PEX 8647, with a power dissipation of 3.8 watts and PCI Express 2.0 support, allowing two GPUs on the same PCI Express slot with doubled bandwidth over the past generation of product (Radeon HD 3870 X2). Subsequent generations of dual-GPU design also feature an interconnect for inter-GPU communications through the implementation of a CrossFire X SidePort on each GPU, giving an extra 5 GB/s of full-duplex inter-GPU bandwidth. These two features increase total bandwidth for dual-GPU designs to 21.8 GB/s.
OpenCL (API)
OpenCL can accelerate many scientific software packages by a factor of 10 to 100 or more compared with running on the CPU alone.
OpenCL 1.0 and 1.1 are supported on all RV7xx-based chips.
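As an illustration of how an application can discover this support at run time, the sketch below enumerates OpenCL platforms and devices and prints the version string each device reports. It uses the third-party pyopencl Python package, which is an assumption made here for illustration and not something provided by the Catalyst driver itself.

    # Minimal OpenCL device enumeration sketch (requires the pyopencl package).
    # On an RV7xx-based board with a suitable driver installed, the device
    # version string would be expected to read "OpenCL 1.0" or "OpenCL 1.1".
    import pyopencl as cl

    for platform in cl.get_platforms():
        print("Platform:", platform.name, "-", platform.version)
        for device in platform.get_devices():
            print("  Device:", device.name)
            print("  Reported version:", device.version)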
Desktop products
Radeon HD 4800
The Radeon HD 4850 was announced on June 19, 2008, and the Radeon HD 4870 was announced on June 25, 2008. Both are based on the RV770 GPU, pack 956 million transistors, and are produced on a 55 nm process. The Radeon HD 4850 uses GDDR3 memory, while the Radeon HD 4870 uses GDDR5 memory.
Another variant, the Radeon HD 4830, was introduced on October 23, 2008, featuring the RV770 LE GPU with a 256-bit GDDR3 memory interface and 640 shader processors. The RV770 LE is essentially an RV770 with some functional units disabled.
Dual GPU products using two RV770 GPUs, codenamed R700, were also announced. One product named Radeon HD 4870 X2, featuring 2×1GB GDDR5 memory, was released on August 12, 2008, while another dual-GPU product, the Radeon HD 4850 X2, with GDDR3 memory and lower clock speeds, is also available.
A minor update was introduced on April 2, 2009 with the launch of Radeon HD 4890 graphics cards based on the RV790 GPU. The RV790 features an improved design with decoupling capacitors to reduce signal noise, altered ASIC power distribution, and a re-timed GPU core, which resulted in a slight increase in die size but much better stability at high clock rates and a higher default clock. On August 18, 2009, AMD released a stripped-down variant of the RV790 GPU called the RV790GT, which is used by the Radeon HD 4860 and available in all markets.
Radeon HD 4700
The Radeon HD 4700 series was announced on April 28, 2009. The Radeon HD 4770 is based on the RV740 GPU, packs 826 million transistors, and is produced on a 40 nm process. The Radeon HD 4730 was introduced on June 8, 2009; unlike the RV740-based Radeon HD 4770, the 4730 uses a stripped-down 55 nm RV770 GPU named the RV770CE. The 4730 packs 956 million transistors and uses GDDR5 memory on a 128-bit bus. On September 9, 2009, the RV740PRO-based Radeon HD 4750 was released exclusively to the Chinese market. The Radeon HD 4750 is based on the 40 nm RV740 of the Radeon HD 4770 but features a lower clock speed and lacks the six-pin auxiliary power input.
Radeon HD 4600
The Radeon HD 4600 series was announced on September 10, 2008. All variants are based on the RV730 GPU, packing 514 million transistors and produced on a 55 nm process. The PCIe versions of the 4600 series do not require external power connectors. An AGP version of the 4670 was also released; it does require an external power connector and, as of March 2018, remained among the last cards produced for the AGP bus.
Radeon HD 4300/HD 4500
The Radeon HD 4350 and Radeon HD 4550 were announced on September 30, 2008, both based on the RV710 GPU, packing 242 million transistors and produced on a 55 nm process. Both products use GDDR3, DDR3 or DDR2 video memory. AMD states these two products have a maximum power consumption of 20 W and 25 W under full load, respectively.
Chipset Table
Desktop Products
1 Unified shaders : Texture mapping units : Render output units
2 The effective data transfer rate of GDDR5 is quadruple its nominal clock, instead of double as it is with other DDR memory.
3 The TDP is reference design TDP values from AMD. Different non-reference board designs from vendors may lead to slight variations in actual TDP.
4 All models feature UVD2 & PowerPlay.
IGP (HD 4000)
All Radeon HD 4000 IGP models include Direct3D 10.1 and OpenGL 2.0
1 Unified shaders : Texture mapping units : Render output units
2 The clock frequencies may vary in different usage scenarios, as ATI PowerPlay technology is implemented. The clock frequencies listed here refer to the officially announced clock specifications.
3 The sideport is a dedicated memory bus. It is preferably used for the frame buffer.
Radeon Feature Matrix
Mobile products
Graphics device drivers
AMD's proprietary graphics device driver "Catalyst"
AMD Catalyst is being developed for Microsoft Windows and Linux. As of July 2014, other operating systems are not officially supported. This may be different for the AMD FirePro brand, which is based on identical hardware but features OpenGL-certified graphics device drivers.
AMD Catalyst supports all features advertised for the Radeon brand.
The Radeon HD 4000 series has been transitioned to legacy support, where drivers will be updated only to fix bugs instead of being optimized for new applications.
Free and open-source graphics device driver "Radeon"
The free and open-source drivers are primarily developed on and for Linux, but have been ported to other operating systems as well. Each driver is composed of five parts:
Linux kernel component DRM
Linux kernel component KMS driver: basically the device driver for the display controller
user-space component libDRM
user-space component in Mesa 3D
a special and distinct 2D graphics device driver for X.Org Server, which is in the process of being replaced by Glamor
The free and open-source "Radeon" graphics driver supports most of the features implemented into the Radeon line of GPUs.
The free and open-source "Radeon" graphics device drivers are not reverse engineered, but based on documentation released by AMD.
See also
AMD FirePro
AMD FireMV
AMD FireStream
List of AMD graphics processing units
References
External links
ATI Radeon HD 4000 Series: Desktop, Mobile
techPowerUp! GPU Database
Graphics cards
Advanced Micro Devices graphics cards ATI brand |
48776405 | https://en.wikipedia.org/wiki/Supachai%20Tangwongsan | Supachai Tangwongsan | Supachai Tangwongsan (, born 18 December 1947) is a Thai emeritus professor of computer science at Mahidol University, Thailand. He received his Doctor of Philosophy degree in electrical engineering at Purdue University with the Royal support of King Ananda Mahidol Foundation scholarship.
Tangwongsan served as Chairman of the Executive Board of the National Software Industry Promotion Agency of Thailand from 2010 to 2014. He held the titles of University Vice-President for Academic Infrastructures and Chief Information Officer (CIO) from 1998 to 2007. He was assigned to establish the Computing Center of Mahidol University and served as its Director from 1986 to 1999. In 1989 he founded the Department of Computer Science and served as its Chairman from 1989 to 1997. In 2003, he also established the Faculty of ICT at Mahidol University.
His best-known work is the Buddhist Scripture Information Retrieval system (BUDSIR), the first computerized edition of the Buddhist scriptures in the world. BUDSIR has been continuously developed, with computerized transliteration into Thai and eight other scripts. BUDSIR was granted an outstanding award for Mahidol University innovative work by the National Research Council of Thailand in 1989, as well as a "Distinguished Service Award" from UC Berkeley in 1993. Another achievement, in 2003, earned him "The ICT Innovation Award 2003", a national first-prize award for Mahidol University's Intra-Phone, presented by the National Information Communication Technology Committee.
Education
Ph.D. in electrical engineering at Purdue University, USA in 1976.
M.S. in electrical engineering at Purdue University, USA in 1972.
B.Eng. in communication electrical engineering (First Class Honors) at Chulalongkorn University, Thailand in 1970.
Working experience
1980 - Founder of Computing Center of Mahidol University.
1980-1999 - Director of Computing Center, Mahidol University.
1989 - Founder of Department of Computer Science, Faculty of Science, Mahidol University.
1989-1997 - Chairman, Department of Computer Science, Mahidol University.
1990 - Associate professor in computer science at Mahidol University.
1998-2007 - Chief information officer (CIO) of Mahidol University.
1999-2007 - Vice president for academic infrastructures and facilitation of Mahidol University.
2003 - Founder of Faculty of Information and Communications Technology (ICT), Mahidol University.
2004 - Acting director of Library Center, Mahidol University.
2009–present - Senior advisor of Faculty of ICT, Mahidol University.
2010-2014 - Chairman of the executive board of National Software Industry Promotion Agency of Thailand
2012 - Professor of computer science at Mahidol University.
2013–present - Emeritus professor of computer science at Mahidol University.
Awards and honors
1970 - King Bhumipol Scholarship and the gold medal award, Faculty of Engineering, Chulalongkorn University.
1971-1977 - Anandamahidol Foundation Scholarship for pursuing graduate degrees.
1973-1976 - Research grants from National Science Foundation (NSF), USA.
1989 - Outstanding award of Mahidol University for the pioneer, Project : "Buddhist Scriptures Computerization".
1989 - Premier Invention Award from National Research Council of Thailand for the pioneer project : "Buddhist Scriptures Computerization"
1993 - Distinguished Service Award from the UC Berkeley, USA for the project "Electronic Pali Canon".
1997 - Outstanding Contribution to Buddhism Award from Department of Religious Affairs, Ministry of Education.
2003 - The ICT Innovation Award 2003 from the National Information Communication Technology Committee for Mahidol University’s Intra-Phone.
2010 - Mahidol's Award for the Distinguished Book titled "Information Storage and Retrieval Systems".
Books
Supachai Tangwongsan, "Information Storage and Retrieval Systems", 3rd Edition, Bangkok Thailand, 2015.
Supachai Tangwongsan, "Managing ICT Projects", Bangkok Thailand, 2015.
References
Supachai Tangwongsan
Purdue University College of Engineering alumni
Buddhist literature
Living people
1947 births |
992421 | https://en.wikipedia.org/wiki/Cisco%20PIX | Cisco PIX | Cisco PIX (Private Internet eXchange) was a popular IP firewall and network address translation (NAT) appliance. It was one of the first products in this market segment.
In 2005, Cisco introduced the newer Cisco Adaptive Security Appliance (Cisco ASA), which inherited many PIX features, and in 2008 it announced the PIX's end-of-sale.
The PIX technology was sold in a blade, the FireWall Services Module (FWSM), for the Cisco Catalyst 6500 switch series and the 7600 Router series, but has reached end of support status as of September 26, 2007.
PIX
History
PIX was originally conceived in early 1994 by John Mayes of Redwood City, California and designed and coded by Brantley Coile of Athens, Georgia. The PIX name is derived from its creators' aim of creating the functional equivalent of an IP PBX to solve the then-emerging registered IP address shortage. At a time when NAT was just being investigated as a viable approach, they wanted to conceal a block or blocks of IP addresses behind a single or multiple registered IP addresses, much as PBXs do for internal phone extensions. When they began, RFC 1597 and RFC 1631 were being discussed, but the now-familiar RFC 1918 had not yet been submitted.
The design and testing were carried out in 1994 by John Mayes, Brantley Coile and Johnson Wu of Network Translation, Inc., with Brantley Coile being the sole software developer. Beta testing of PIX serial number 000000 was completed, and the first customer acceptance took place on December 21, 1994 at KLA Instruments in San Jose, California. The PIX quickly became one of the leading enterprise firewall products and was awarded the Data Communications Magazine "Hot Product of the Year" award in January 1995.
Shortly before Cisco acquired Network Translation in November 1995, Mayes and Coile hired two longtime associates, Richard (Chip) Howes and Pete Tenereillo, and shortly after acquisition 2 more longtime associates, Jim Jordan and Tom Bohannon. Together they continued development on Finesse OS and the original version of the Cisco PIX Firewall, now known as the PIX "Classic". During this time, the PIX shared most of its code with another Cisco product, the LocalDirector.
On January 28, 2008, Cisco announced the end-of-sale and end-of-life dates for all Cisco PIX Security Appliances, software, accessories, and licenses. The last day for purchasing Cisco PIX Security Appliance platforms and bundles was July 28, 2008. The last day to purchase accessories and licenses was January 27, 2009. Cisco ended support for Cisco PIX Security Appliance customers on July 29, 2013.
In May 2005, Cisco introduced the ASA which combines functionality from the PIX, VPN 3000 series and IPS product lines. The ASA series of devices run PIX code 7.0 and later. Through PIX OS release 7.x the PIX and the ASA use the same software images. Beginning with PIX OS version 8.x, the operating system code diverges, with the ASA using a Linux kernel and PIX continuing to use the traditional Finesse/PIX OS combination.
Software
The PIX runs a custom-written proprietary operating system originally called Finesse (Fast Internet Service Executive), but the software is known simply as PIX OS. Though classified as a network-layer firewall with stateful inspection, the PIX would more precisely be called a Layer 4, or transport-layer, firewall, as its access control is based not only on network-layer routing but on socket-based connections (an IP address plus a port; port communications occur at Layer 4). By default it allows internal connections out (outbound traffic), and only allows inbound traffic that is a response to a valid request or is explicitly allowed by an Access Control List (ACL) or by a conduit. Administrators can configure the PIX to perform many functions including network address translation (NAT) and port address translation (PAT), as well as serving as a virtual private network (VPN) endpoint appliance.
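As a rough illustration of the PAT and stateful-inspection behaviour described above (and not actual PIX code), the following toy Python sketch keeps a translation table so that only return traffic matching a recorded outbound connection is admitted; all addresses are made-up documentation examples:

```python
import itertools

class TinyPat:
    """Toy port address translation with a stateful connection table."""

    def __init__(self, outside_ip):
        self.outside_ip = outside_ip
        self.next_port = itertools.count(1024)   # translated source ports handed out in order
        self.table = {}                          # outside_port -> (inside_ip, inside_port, destination)

    def outbound(self, inside_ip, inside_port, dst_ip, dst_port):
        """Translate an outbound packet and record the connection state."""
        out_port = next(self.next_port)
        self.table[out_port] = (inside_ip, inside_port, (dst_ip, dst_port))
        return (self.outside_ip, out_port, dst_ip, dst_port)

    def inbound(self, src_ip, src_port, out_port):
        """Admit inbound traffic only if it matches a recorded connection."""
        state = self.table.get(out_port)
        if state and state[2] == (src_ip, src_port):
            inside_ip, inside_port, _ = state
            return (inside_ip, inside_port)
        return None   # unsolicited inbound traffic is dropped, as in the default policy

pat = TinyPat("198.51.100.1")
mapped = pat.outbound("10.0.0.5", 51000, "203.0.113.9", 80)
print(pat.inbound("203.0.113.9", 80, mapped[1]))   # ('10.0.0.5', 51000) -> allowed reply
print(pat.inbound("203.0.113.77", 80, mapped[1]))  # None -> dropped, no matching state
```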
The PIX became the first commercially available firewall product to introduce protocol specific filtering with the introduction of the "fixup" command. The PIX "fixup" capability allows the firewall to apply additional security policies to connections identified as using specific protocols. Protocols for which specific fixup behaviors were developed include DNS and SMTP. The DNS fixup originally implemented a very simple but effective security policy; it allowed just one DNS response from a DNS server on the Internet (known as outside interface) for each DNS request from a client on the protected (known as inside) interface. "Inspect" has superseded "fixup" in later versions of PIX OS.
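The original one-response-per-request DNS policy can be sketched in the same toy style; matching on the DNS query ID here is an illustrative simplification rather than the actual PIX implementation:

```python
class TinyDnsGuard:
    """Allow exactly one DNS response for each outstanding query ID."""

    def __init__(self):
        self.pending = set()    # query IDs seen leaving the inside interface

    def outbound_query(self, query_id):
        self.pending.add(query_id)

    def inbound_response(self, query_id):
        if query_id in self.pending:
            self.pending.discard(query_id)   # later responses with this ID are dropped
            return True
        return False

guard = TinyDnsGuard()
guard.outbound_query(0x1a2b)
print(guard.inbound_response(0x1a2b))  # True: first response is passed to the client
print(guard.inbound_response(0x1a2b))  # False: duplicate or spoofed response is dropped
```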
The Cisco PIX was also one of the first commercially available security appliances to incorporate IPSec VPN gateway functionality.
Administrators can manage the PIX via a command line interface (CLI) or via a graphical user interface (GUI). They can access the CLI from the serial console, telnet and SSH. GUI administration originated with version 4.1, and it has been through several incarnations:
PIX Firewall Manager (PFM) for PIX OS versions 4.x and 5.x, which runs locally on a Windows NT client
PIX Device Manager (PDM) for PIX OS version 6.x, which runs over https and requires Java
Adaptive Security Device Manager (ASDM) for PIX OS version 7 and greater, which can run locally on a client or in reduced-functionality mode over HTTPS.
Because Cisco acquired the PIX from Network Translation, the CLI originally did not align with the Cisco IOS syntax. Starting with version 7.0, the configuration became much more IOS-like.
Hardware
The original NTI PIX and the PIX Classic had cases that were sourced from OEM provider Appro. All flash cards and the early encryption acceleration cards, the PIX-PL and PIX-PL2, were sourced from Productivity Enhancement Products (PEP). Later models had cases from Cisco OEM manufacturers.
The PIX was constructed using Intel-based/Intel-compatible motherboards; the PIX 501 used an Am5x86 processor, and all other standalone models used Intel 80486 through Pentium III processors.
The PIX boots off a proprietary ISA flash memory daughtercard in the case of the NTI PIX, PIX Classic, 10000, 510, 520, and 535, and it boots off integrated flash memory in the case of the PIX 501, 506/506e, 515/515e, 525, and WS-SVC-FWM-1-K9. The latter is the part code for the PIX technology implemented in the Firewall Services Module for the Catalyst 6500 and the 7600 Router.
Adaptive Security Appliance (ASA)
The Adaptive Security Appliance is a network firewall made by Cisco. It was introduced in 2005 to replace the Cisco PIX line. Along with stateful firewall functionality, the ASA focuses on Virtual Private Network (VPN) functionality; it also features intrusion prevention and Voice over IP support. The ASA 5500 series was followed by the 5500-X series, which focuses more on virtualization than on hardware-accelerated security modules.
History
In 2005 Cisco released the 5510, 5520, and 5540 models.
Software
The ASA continues using the PIX codebase but, when the ASA OS software transitioned from major version 7.X to 8.X, it moved from the Finesse/Pix OS operating system platform to the Linux operating system platform. It also integrates features of the Cisco IPS 4200 Intrusion prevention system, and the Cisco VPN 3000 Concentrator.
Hardware
The ASA continues the PIX lineage of Intel 80x86 hardware.
Security vulnerabilities
The Cisco PIX VPN product was hacked by the NSA-tied group Equation Group some time before 2016. Equation Group developed a tool code-named BENIGNCERTAIN that reveals the pre-shared password(s) to the attacker. Equation Group was later hacked by another group called The Shadow Brokers, which published their exploit publicly, among others. According to Ars Technica, the NSA likely used this vulnerability to wiretap VPN connections for more than a decade, citing the Snowden leaks.
The Cisco ASA brand was also hacked by Equation Group. The vulnerability requires that both SSH and SNMP are accessible to the attacker. The codename given to this exploit by the NSA was EXTRABACON. The bug and exploit were also leaked by The Shadow Brokers in the same batch of exploits and backdoors. According to Ars Technica, the exploit can easily be made to work against more modern versions of Cisco ASA than those the leaked exploit handles.
On 29 January 2018, a security vulnerability in the Cisco ASA was disclosed by Cedric Halbronn of the NCC Group. A use-after-free bug in the Secure Sockets Layer (SSL) VPN functionality of the Cisco Adaptive Security Appliance (ASA) software could allow an unauthenticated remote attacker to cause a reload of the affected system or to remotely execute code.
See also
Cisco LocalDirector
References
Pix
Computer network security
Server appliance |
31443446 | https://en.wikipedia.org/wiki/Instituto%20Superior%20Santo%20Domingo | Instituto Superior Santo Domingo | Instituto Superior Santo Domingo (ISSD) is a 3-year private-technical institute located in Córdoba, Argentina. ISSD was founded in 1986 in the city of Córdoba as a private third level education institute.
History
The institution was founded in 1986 with the aim of offering education in computing, computer science and telecommunications. It began its (unofficial) educational activities on August 11, 1986 under the name CEPRICyC (Private Training Center and Computing).
In 1992, the institute began the process of joining the formal education system under the supervision of the official agency DIPE (Department of Private Colleges of the Province of Córdoba), part of the Ministry of Education of the Province of Córdoba. The institute met all the requirements needed to become an official college, passed the respective inspections, and finally began its (official) educational activities in 1996.
Academics
ISSD's 3-year academic programs are dedicated exclusively to specific areas of computing and computer science, business, and telecommunications. Its academic offerings are:
System Analyst (computing-computer science)
Webmaster (computing-computer science)
Telecommunication
Business Management
References
Educational institutions established in 1986
1986 establishments in Argentina
Education in Argentina |
47870090 | https://en.wikipedia.org/wiki/Capital%20University%20of%20Science%20%26%20Technology | Capital University of Science & Technology | The Capital University of Science & Technology () is a private university located in Islamabad, Pakistan. Established in 1998 under the banner of Muhammad Ali Jinnah University Islamabad Campus, the university offers undergraduate and post-graduate programs with a strong emphasis on business management, applied sciences, engineering and computer science.
History
The Punjab Group of Colleges has been serving the community with education since 1985. Punjab College of Commerce was the first institution to be established by the Group at Lahore. The Group is extending its network to several cities in the country.
Under the umbrella of the Punjab Group of Colleges, Punjab College of Business Administration and Punjab Institution of Computer Science have emerged as business and computer science institutions. Punjab Law College and Punjab College of Information Technology are also links in this chain of colleges.
As a tribute to the Father of the Nation, the group named its next ambitious project Mohammad Ali Jinnah University. The university was granted its charter by the government of Sindh in 1998. The Islamabad campus was established after obtaining NOCs from the UGC, dated 17 August 1998 and 29 November 2001, and an NOC from the HEC dated 27 September 2003.
In recognition of its services to education, the group has been awarded another charter by the government of Punjab to establish the University of Central Punjab.
Resource Academia was established in Islamabad after the success of the school in Lahore in 2003. The institution will provide education from preschool to grade VIII at the junior level and from O level to A level at the senior level. The aim of the group is to establish a nationwide network of schools.
The group has established institutions providing education from the pre-school level to the Ph.D. level. The campuses are located in the major cities of Pakistan.
Academics
The university has departments of Computer Science, Bioinformatics and Biosciences, Mathematics, Pharmacy, and Software Engineering in the Faculty of Computing; departments of Civil Engineering, Mechanical Engineering, and Electrical Engineering in the Faculty of Engineering; and departments of Business Administration, Economics, and Social Sciences in the Faculty of Business Administration and Social Sciences.
In addition to pure academic programs, the university runs training programs, seminars and workshops. The university has started doctoral programs in Computer Sciences, Mathematics, Bioinformatics, Civil Engineering, Mechanical Engineering, Electrical Engineering and Management Sciences.
Research groups and labs
Center of Research in Networks and Telecommunications
Center of Research in Networks and Telecommunications fosters research and development activity in the rapidly growing field of networks and telecommunications.
Control and Signal Processing Research Group
Control and Signal Processing Research Group.
Vision and Pattern Recognition Research Group
Vision and Pattern Recognition Group is involved in basic and applied research in the fields of Image Processing, Machine/Computer Vision and Pattern Recognition/Classification.
ACME Center for Research in Wireless Communications
The mission of the ACME Center for Research in Wireless Communication is to conduct research in networking (core, wireless, sensors), mobile and pervasive computing, distributed and grid computing, cellular networks and social networks.
Center for Software Dependability
Center for Software Dependability is a research group founded in June 2003. The group has been working in diverse areas within the domain of Software Dependability, with focus on Software Reliability, Formal Methods, Model Driven Architecture and Software Testing.
Engineering Societies
American Society of Mechanical Engineers
Institute of Electrical and Electronics Engineers
American Society of Heating, Refrigerating and Air Conditioning Engineers
Engineers Voice
Hope-CUST Welfare Society
Engineer's Forum
Jinnah Engineering Society
Energy and Environment
See also
Punjab Group of Colleges
Punjab College of Business Administration
Punjab Law College
Mohammad Ali Jinnah University, Karachi
University of Central Punjab, Lahore
References
External links
CUST official website
Universities and colleges in Islamabad
Private universities and colleges in Pakistan
Engineering universities and colleges in Pakistan
1998 establishments in Pakistan
Educational institutions established in 1998 |
235696 | https://en.wikipedia.org/wiki/Softmodem | Softmodem | A software modem, commonly referred to as a softmodem, is a modem with minimal hardware that uses software running on the host computer, and the computer's resources (especially the central processing unit, random access memory, and sometimes audio processing), in place of the hardware in a conventional modem.
Softmodems are also sometimes called winmodems due to limited support for platforms other than Windows. By analogy, a linmodem is a softmodem that can run on Linux.
Softmodems are sometimes used as an example of a hard real-time system. The audio signals to be transmitted must be computed on a tight interval (on the order of every 5 or 10 milliseconds); they cannot be computed in advance, and they cannot be late or the receiving modem will lose synchronization.
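A minimal transmit-side sketch in Python with NumPy illustrates both the block-by-block signal generation described here and the simple FSK modulation used by early modems (see History below); the 8 kHz sample rate, 300 baud rate and Bell 103-style mark/space tones are assumptions chosen for the example, not figures from this article:

```python
import numpy as np

SAMPLE_RATE = 8000              # samples per second (assumed)
BAUD = 300                      # Bell 103-style bit rate (assumed)
MARK, SPACE = 1270.0, 1070.0    # originate-side tone frequencies in Hz (assumed)

def fsk_block(bits, phase=0.0):
    """Generate one audio block for a group of bits, keeping the phase continuous."""
    samples = []
    for bit in bits:
        freq = MARK if bit else SPACE
        n = int(SAMPLE_RATE / BAUD)                      # samples per bit
        t = np.arange(n)
        samples.append(np.sin(phase + 2 * np.pi * freq * t / SAMPLE_RATE))
        phase += 2 * np.pi * freq * n / SAMPLE_RATE      # carry phase into the next bit
    return np.concatenate(samples), phase

# Each call produces roughly 10 ms of audio (3 bits at 300 baud), which is the kind of
# block the host CPU must finish computing before the sound card needs it.
block, phase = fsk_block([1, 0, 1])
print(len(block) / SAMPLE_RATE)   # ~0.01 seconds of audio per block
```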
History
The first generations of hardware modems (including acoustic couplers) and their protocols used relatively simple modulation techniques such as FSK or ASK at low speeds. Under these conditions, modems could be built with the analog discrete-component technology used during the late 1970s and early 1980s.
As more sophisticated transmission schemes were devised, the circuits grew substantially in complexity. New modulation schemes required mixing analog and digital components, and eventually incorporating multiple integrated circuits (ICs) such as logic gates, PLLs and microcontrollers. Later techniques used in modern V.34, V.90 and V.92 protocols (such as a 1664-point QAM constellation) are so complex that implementing them with discrete components or general-purpose ICs became impractical.
Furthermore, improved compression and error correction schemes were introduced in the newest protocols, requiring extra processing power in the modem itself. This made the construction of a mainly analog/discrete component modem impossible. Finally, compatibility with older protocols using completely different modulation schemes would have required a modem made with discrete electronics to contain multiple complete implementations.
Initially the solution was to use LSI ASICs which shrank the various implementations into a small number of components, but since standards continued to change, there was a desire to create modems that could be upgraded.
In 1993, Digicom marketed the Connection 96 Plus, a modem based around a DSP which was programmed by an application on startup. Because the program was replaceable, the modem could be upgraded as standards improved. Digicom branded this technology SoftModem, perhaps originating the term.
Likewise, the term "Winmodem" may have originated with USRobotics' Sportster Winmodem, a similarly upgradable DSP-based design.
In 1996, two types of modem began to reach the market: host-based modems, which offloaded some work onto the host CPU, and software-only modems which transferred all work onto the host system's CPU. In 1997, the AC'97 standard for computer audio would introduce channels for modem use, making software modem technology common in PCs.
Since then, some softmodems have been created as standalone software projects utilizing standard sound card interfaces, such as an experimental open-source 96 kbit/s leased-line softmodem called AuDSL from 1999, and the Minimodem project which implements several FSK modem standards.
Types
Softmodems can be separated into two classes: controllerless modems and pure software modems.
Controllerless modems utilize a DSP on the modem itself to perform modulation, demodulation and other tasks. Those known as "host-based" modems may still offload a portion of this work onto the host CPU.
Pure software modems perform all modem tasks on the host PC's CPU, while the hardware provides only analog-digital conversion and connection to the telephone network.
Advantages and disadvantages
The original stated purpose of the DSP-based softmodem was to provide for upgradeability, a concern in an era when modem standards were changing rapidly. Both DSP and pure software modems offer this feature.
A downside of either type of softmodem is that drivers must be provided, and the terms "softmodem" and "winmodem" have gained negative connotations, particularly within the open-source community, due to drivers for Linux often being omitted or provided only as unmaintainable binaries.
While DSP-based softmodems usually only require host attention during startup, pure software modems consume some CPU cycles on the host, which can conceivably slow down application software on older computers. This was a major issue in the 1990s, when CPUs were not nearly as powerful as today's typical hardware.
DSL softmodems
Although "softmodem" typically applies to PSTN modems, there are some software-based DSL modems or even routers, which work on the same principles but at higher bandwidth and more complex encoding schemes. One of the first software based DSL modem chipsets was Motorola's SoftDSL chipset.
The term WinDSL has been coined to describe this kind of technology. DSL softmodems generally require the same interfaces as PSTN softmodems, such as USB or PCI.
See also
Baseband processor
Geoport
Software-defined radio (SDR)
Winprinter
References
External links
A review of the differences between software-based modems and hardware-based modems
Modems and their chipsets lists
Modems |
25920608 | https://en.wikipedia.org/wiki/1968%20Rose%20Bowl | 1968 Rose Bowl | The 1968 Rose Bowl was the 54th edition of the college football bowl game, played at the Rose Bowl in Pasadena, California, on Monday, January 1. The USC Trojans of the Pacific-8 Conference defeated the Indiana Hoosiers of the Big Ten Conference, 14–3. USC tailback O. J. Simpson was named the Player of the Game.
Teams
This remains the only Rose Bowl appearance for Indiana. USC was a two-touchdown favorite; this was the first Rose Bowl in fifteen years in which the West Coast team was favored. In the intervening fourteen games, the Big Ten had won ten and lost four (1960, 1961, 1963, 1966).
Being an even-numbered year for the bowl game, Indiana wore their crimson jerseys as the home team and USC wore their white shirts as the designated visitors.
USC
The top-ranked and Pac-8 champion Trojans came into the game with a 9–1 record, losing only at Oregon State in the November mud in a close 3–0 game. They fell to fourth in the AP poll, then reclaimed the top spot a week later after a close 21–20 win over rival and then-#1 UCLA in their heavily-anticipated conference finale, securing another trip to the Big Ten/Pac-8 classic. Runner-up Oregon State had a conference loss (at Washington) and a tie (at UCLA), and the deflated UCLA Bruins lost again the following week 32–14 at home to non-conference Syracuse.
The Trojans were led by their powerful junior tailback O. J. Simpson, a junior college transfer from San Francisco. Unlike the Big Ten and the old Pacific Coast Conference, the Pac-8 did not have a "no-repeat" rule; this was the second of four consecutive Rose Bowl appearances for the Trojans.
Indiana
The fourth-ranked and co-Big Ten champion Hoosiers also came into the game with a 9–1 record, losing only to Minnesota a week before defeating Purdue. A three-way tie for the league championship resulted when all three finished with 6–1 conference records, each having defeated one of the others and lost to another. Purdue was ineligible because of the "no-repeat" rule of the Big Ten and the "Rose Bowl or no bowl" rule enforced by both of the participating conferences (Big Ten and AAWU). Purdue had played in Pasadena the previous year, beating USC by a point, 14–13.
The conference's athletic directors voted to award the Rose Bowl bid to Indiana over Minnesota, albeit not unanimously. Indiana was considered the logical choice because they were the only Big Ten school yet to appear in the game. Minnesota coach Murray Warmath argued in vain that the Gophers deserved the bid because their prior two Rose Bowl teams, after the 1960 and 1961 seasons, received at-large bids because there was no agreement between the Big Ten and the Rose Bowl at the time; thus, technically, the Gophers never had received a Rose Bowl bid pursuant to that arrangement. Ironically, if Purdue had beaten Indiana in the season finale, the Boilermakers would have had sole possession of the conference championship, but Minnesota presumably would have received the Rose Bowl bid as the second place team in lieu of the ineligible Boilers. Instead, Indiana scored a 19–14 upset over Purdue, giving Minnesota a share of the conference championship but costing them a trip to Pasadena. Quarterback Harry Gonso led the Hoosiers into their first ever bowl game.
Scoring
First quarter
USC - O. J. Simpson 2-yard run (Rikki Aldridge kick)
Second quarter
Indiana - Dave Kornowa 27-yard field goal
Third quarter
USC - Simpson 8-yard run (Aldridge kick)
Fourth quarter
No scoring
References
Rose Bowl
Rose Bowl Game
Indiana Hoosiers football bowl games
USC Trojans football bowl games
January 1968 sports events in the United States
Rose Bowl
O. J. Simpson |
34866612 | https://en.wikipedia.org/wiki/2012%20California%20Golden%20Bears%20football%20team | 2012 California Golden Bears football team | The 2012 California Golden Bears football team represented University of California, Berkeley in the 2012 NCAA Division I FBS college football season. The Bears were led by eleventh-year head coach Jeff Tedford and played their home games at Memorial Stadium after having played at home the previous season at AT&T Park due to reconstruction on Memorial Stadium. They were members of the North Division of the Pac-12 Conference.
Coming off a 7–5 previous season, the Bears fell to 3–9 (2–7 in the Pac-12), the second losing season in three years and the worst of the Tedford era. Despite a decisive win over eventual Pac-12 South winner UCLA, Cal closed out the season with five consecutive losses. While wide receiver Keenan Allen became the team's all time leader in career receptions, no receiver posted a 1,000 yard season and no running backs broke the 1,000-yard rushing mark. Tedford was subsequently fired as head coach on November 20.
Roster
Depth chart
Coaching staff
Jeff Tedford – Head Coach – 11th year
Jim Michalczik – Offensive Coordinator/Offensive Line – 9th year
Clancy Pendergast – Defensive Coordinator – 3rd year
Jeff Genyk – Special Teams/Tight Ends – 3rd year
Marcus Arroyo – Quarterbacks – 2nd year
Kenwick Thompson – Assistant Head Coach/Linebackers – 6th year
Todd Howard – Defensive Line – 1st year
Ron Gould – Associate Head Coach/Run Game Coordinator – 16th year
Wes Chandler – Wide Receivers – 1st year
Ashley Ambrose – Defensive Backs – 2nd year
Ryan McKinley – Defensive Graduate Assistant – 2nd year
Ben Steele – Offensive Administrative Assistant – 2nd year
Schedule
Game summaries
Nevada
The Bears reopened California Memorial Stadium with over 11,000 fewer seats on September 1 with a loss to the Nevada Wolf Pack. It was the first home game for the Bears at Memorial Stadium since November 20, 2010 when they took on the Washington Huskies, and to commemorate the occasion the Cal athletic department had a ribbon-cutting ceremony planned before the game. The last time Cal faced Nevada was on September 17, 2010 at Mackay Stadium in Reno, where they lost 52–32. The last time the Bears defeated the Wolf Pack was a 33–15 game in Berkeley. Cal however, had a large advantage in the all-time series against Nevada with a record of 22–2–1 with all of the games being played in Berkeley except for the 2010 game and another meeting back in 1915.
Cal quarterback Zach Maynard was benched for the first three series of the game as punishment for missing a tutoring session earlier during the summer and Allan Bridgford started in his stead. Nevada jumped out to an early lead with an 80 yard scoring drive capped off by a 2-yard run by Stefphon Jefferson. Quarterback Cody Fajardo had a 45-yard run at the end of the quarter to put the Wolf Pack up 14–0 as the quarter wound down. The Bears got on the board in the second quarter with a 37-yard reception by receiver Bryce Treggs. A 31-yard field goal attempt in the final seconds of the quarter missed, making it 14–7 Nevada at the half.
Jefferson had his second touchdown run of the game on a 2-yard run on the Wolf Pack's second possession of the third quarter. The Bears responded on the following drive when receiver Keenan Allen was able to score on a 39-yard run and Cal was able to convert a Nevada fumble recovered on the kick off with a 40-yard field goal. However a Maynard fumble in Nevada territory in the beginning of the quarter led to a 39-yard field goal by the Wolf Pack. Cal tied the game on the ensuing possession with a 13-yard reception by receiver Chris Harper. However Nevada put the game away with a 2-yard run by Jefferson in the final minute and recovered a second Cal fumble on the game's final drive.
Nevada quarterback Cody Fajardo threw for 230 yards and ran for 97, including one of the Wolf Pack's four touchdowns. Running back Stefphon Jefferson accounted for the other three with 145 yards on the ground. Zach Maynard passed for 247 yards and two scores, while the Bears put up a total of 110 yards on the ground, half of Nevada's.
Southern Utah
On September 8 California hosted the Southern Utah Thunderbirds for the first time in program history. The Thunderbirds are members of the Big Sky Conference and are part of the Football Championship Subdivision (FCS). The Bears committed two turnovers in the first quarter but only the second was converted into points in the form of a 40-yard field goal. Cal tied with an 18-yard field goal to open the second quarter and added a pair of touchdowns on a 6-yard run by C.J. Anderson and a 12-yard run by Isi Sofele. A 27-yard field goal came on Cal's final series of the half with Southern Utah adding a touchdown on a 37-yard pass From Brad Sorensen to receiver Cameron Morgan as time expired to make it 20–10 Cal at the half.
The only points in the third quarter came on a 5-yard reception to Southern Utah running back Henna Brown. The fourth quarter saw an explosion in scoring by Cal, leading off with a 19-yard scoring reception by Keenan Allen. Cornerback Marc Anthony then intercepted Sorensen on the ensuing drive for a 61-yard score. A 47-yard field goal was then followed up with a 69-yard punt return for a touchdown by Allen. The Thunderbirds added an 8-yard reception for a touchdown by defensive back Brian Wilson and a 7-yard scoring reception by receiver Fatu Moala, sandwiching a 77-yard touchdown run by Cal running back Daniel Lasco with the PAT missing.
Southern Utah's Brad Sorensen passed for 292 yards and four touchdowns with one pick, with four different receivers catching scores while the ground game was held to 79 yards. Zach Maynard threw for 229 yards, a touchdown and an interception. Isi Sofele had his first 100 yard game of the season with 104 as Cal rushed for 289 yards.
Ohio State
The Bears' game on September 15 against the Ohio State Buckeyes was the first meeting between the two schools since 1972 and was California's final non-conference game of the season. California and Ohio State had met six times previously, with the Buckeyes winning five of the six; California's only win in the series came in the 1921 Rose Bowl, during one of the Bears' five claimed national championship seasons. This game was the first of a home-and-home series, with Ohio State scheduled to visit Berkeley in 2013.
Ohio State scored first on a 55-yard run by quarterback Braxton Miller with the PAT missing. Cal responded with a 19-yard reception by receiver Chris Harper. The Buckeyes came right back with a 25-yard scoring reception by receiver Devin Smith. The sole score of the second quarter was a 1-yard reception by receiver Jake Stoneburner. A 40-yard field goal attempt by the Bears missed to make it 20–7 Ohio State at the half.
Cal scored in the third quarter with an 81-yard run by running back Brendan Bigelow with a 42-yard field goal attempt missing. Maynard had a 1-yard run to open the fourth quarter and the Buckeyes added a 3-yard reception by Stoneburner with Miller successfully rushing for the two-point conversion. Bigelow had his second rushing touchdown of the game with a 59-yard run. Miller was intercepted on the following drive but Cal failed to capitalize on it when a 42-yard field goal missed. The go ahead score came with a 72-yard scoring reception by Smith and Ohio State picked off Maynard on the ensuing drive to hold off the Bears.
Ohio State's Braxton Miller threw for 249 yards and four scores with an interception, and receiver Devin Smith had 145 yards with two touchdown receptions. Zach Maynard passed for 280 yards, a score and a pick, and running back Brendan Bigelow had 160 yards on the ground with two scores.
USC
California traveled to the Los Angeles Memorial Coliseum on September 22 to face the USC Trojans in the two schools' 100th meeting. The Trojans led the all-time series 66–29–5 and had won the last meeting 30–9 in San Francisco. The last California win in the series came in 2003, when the Bears defeated the third-ranked Trojans 34–31 in three overtimes.
1st quarter scoring: USC – Silas Redd 33-yard run (Andre Heidari kick)
2nd quarter scoring: CAL – Vincen D'Amato 24-yard field goal; USC – Marqise Lee 11-yard pass from Matt Barkley (Heidari kick); USC – Heidari 40-yard field goal
3rd quarter scoring: CAL – D'Amato 26-yard field goal; CAL – D'Amato 35-yard field goal
4th quarter scoring: USC – Heidari 41-yard field goal; USC – Lee 3-yard pass from Barkley (Heidari kick)
Arizona State
California met the Arizona State Sun Devils on September 29 at California Memorial Stadium in the first Pac-12 home game of the season. The Golden Bears led the all-time series 17–14, with California winning the last meeting 47–38 in Tempe. Under head coach Jeff Tedford, California had gone 8–1 against the Sun Devils and had won the last four meetings.
1st quarter scoring: ARIZ – Darwin Rogers 1-yard pass from Taylor Kelly (Alex Garoutte kick).
2nd quarter scoring: CAL – Isi Sofele 24-yard run (Vincenzo D'Amato kick); ARIZ – Garoutte 28-yard field goal; ARIZ – Kevin Ozier 9-yard pass from Kelly (Garoutte kick).
3rd quarter scoring: ARIZ – Garoutte 33-yard field goal; CAL – D'Amato 35-yard field goal.
4th quarter scoring: CAL – Keenan Allen 10-pass from Zach Maynard (D'Amato Kick); ARIZ – Ozier 22-yard pass from Kelly (Garoutte Kick)
UCLA
California met the UCLA Bruins on October 6 at California Memorial Stadium for the University of California's annual Joe Roth Memorial game and homecoming game. The Bruins led the all-time series 50–31–1, with UCLA winning the last meeting 31–14 in Pasadena. The Bruins, however, had not won in Berkeley since the 1998 season, with the last California victory coming in 2010 at Memorial Stadium. The California athletic department scheduled the official rededication of the stadium during halftime, with a stadium-wide card stunt and a combined halftime show by the University of California Marching Band and the UCLA Bruin Marching Band, the first combined show by the two bands in decades. The stadium, originally dedicated as a memorial to Californians who lost their lives in World War I, was officially rededicated in memory of all Californians who have lost their lives in war. Terry Leyden was the referee for the game.
1st quarter scoring: UCLA – Cassius Marsh 4-yard pass from Brett Hundley (Ka'i Fairbairn kick); CAL – D'Amato, Vincen 26-yard field goal.
2nd quarter scoring: CAL – C. J. Anderson 5-yard pass from Zach Maynard (D'Amato kick); CAL – Keenan Allen 8-yard pass from Maynard (D'Amato kick blockdd)
3rd quarter scoring: CAL – Brendan Bigelow 32-yard pass from Maynard (D'Amato kick); UCLA – Joseph Fauria 3-yard pass from Hundley (Fairbairn kick); CAL – Allen 34-yard pass from Maynard (D'Amato kick blockdd)
4th quarter scoring: UCLA – Fairbairn 29-yard field goal; CAL – Maynard 1-yard run (D'Amato kick); CAL – Anderson 68-yard run (D'Amato kick).
Washington State
California traveled to Martin Stadium on October 13 to face the Washington State Cougars for the Bears' first Pac-12 North divisional opponent. The Golden Bears lead the all-time series 43–25–5 with California winning the last meeting 30–7 in San Francisco. California has won seven straight games against the Cougars with the last WSU win coming in 2002.
Stanford
For the first time since 1892, the annual Big Game between the California Golden Bears and the Stanford Cardinal was not played at the end of the season in either November or December. The new Pac-12 television deal signed in 2011 has been faulted for the move because it created many scheduling issues; in addition, both universities refused to play their rivalry game on the Saturday after Thanksgiving. The reasoning for not wanting the Big Game after Thanksgiving is that many students are out of town for the holiday and that, because of the short week, many longstanding events held throughout the week leading up to the game would not be possible. Because it was an even-numbered year, Stanford traveled to California Memorial Stadium for the 115th Big Game, with the outcome determining possession of the Stanford Axe. Jeff Tedford had won seven of the previous ten Big Games in his career at California.
Utah
California traveled to the Rice-Eccles Stadium on October 27 to face the Utah Utes for the Bears' first trip to Salt Lake City with Utah as a conference opponent. The Golden Bears lead the all-time series 5–3 with California winning the last meeting 34–10 in San Francisco. The last time California travelled to Salt Lake City to face the Utes, the all-time attendance record at Rice-Eccles Stadium (46,768) was set.
Utah senior running back Reggie Dunn set an NCAA record with two 100-yard kickoff returns for touchdowns.
Washington
California met the Washington Huskies on November 2 at California Memorial Stadium for a Friday night, primetime matchup on ESPN2. The Huskies lead the all-time series 49–38–4 with Washington winning the last meeting 31–23 in Seattle. The last California win came during the 2008 season and prior to that, the Bears won five straight against the Huskies from 2002 to 2006.
Oregon
California met the Oregon Ducks on November 10 at California Memorial Stadium for a meeting with the preseason, north division favorites. The Golden Bears lead the all-time series 40–32–2 with Oregon winning the last meeting 43–15 in Eugene. Under head coach Jeff Tedford, the Bears have only lost once to the Ducks in Berkeley. The only home defeat came in 2010 after an incredibly close 15–13 loss to the then-#1 ranked Ducks.
1st quarter scoring: ORE – Colt Lyerla 10 Yd Pass From Marcus Mariota (Alejandro Maldonado Kick); CAL – Darius Powe 10 Yd Pass From Allan Bridgford (Vincenzo D'Amato Kick); ORE – Byron Marshall 3 Yd Run (Maldonado Kick)
2nd quarter scoring: CAL – D'Amato 27 Yd Field Goal; ORE – Maldonado 26 Yd Field Goal; ORE – Josh Huff 10 Yd Pass From Marcus Mariota (Maldonado Kick)
3rd quarter scoring: CAL – Isi Sofele 4 Yd Run (D'Amato Kick); ORE – Josh Huff 35 Yd Pass From Marcus Mariota (Maldonado Kick); ORE – Josh Huff 39 Yd Pass From Marcus Mariota (Maldonado Kick)
4th quarter scoring: ORE – Colt Lyerla 14 Yd Pass From Marcus Mariota (Maldonado Kick); ORE – Will Murphy 7 Yd Pass From Marcus Mariota (Maldonado Kick); ORE – B.J. Kelley 18 Yd Pass From Bryan Bennett (Maldonado Kick)
Oregon State
California traveled to Reser Stadium to face the Oregon State Beavers in the Bears' final game of the regular season. The Golden Bears led the all-time series 34–30–0, with California winning the last meeting 23–6 in San Francisco. The two schools had split their previous two meetings, and Oregon State had compiled a 7–3 record against California since 2005.
1st quarter scoring: ORST – Markus Wheaton 11 Yd Pass From Sean Mannion (Trevor Romaine Kick); CAL – Isi Sofele 9 Yd Run (Vincenzo D'Amato Kick); ORST – Tyler Anderson 1 Yd Run (Trevor Romaine Kick)
2nd quarter scoring: ORST – Brandin Cooks 48 Yd Pass From Sean Mannion (Trevor Romaine Kick); ORST – Connor Hamlett 14 Yd Pass From Sean Mannion (Trevor Romaine Kick); ORST – Micah Hatfield 6 Yd Pass From Sean Mannion (Trevor Romaine Kick)
3rd quarter scoring: ORST – Storm Woods 1 Yd Run (Trevor Romaine Kick); ORST – Terron Ward 47 Yd Run (Trevor Romaine Kick); CAL – Allan Bridgford 1 Yd Run (Vincenzo D'Amato Kick)
4th quarter scoring: ORST – Terron Ward 17 Yd Run (Pat Failed); ORST – Malcolm Agnew 8 Yd Pass From Richie Harrington (Trevor Romaine Kick)
Postseason
Three days after the close of the season Tedford was let go on November 20. On December 5, Louisiana Tech head coach Sonny Dykes was announced as his successor. On the same day, wide receiver Keenan Allen, who had become Cal's all-time leader in receptions during the season, announced that he would forgo his senior season and enter the 2013 NFL Draft. Several members of Dykes' coaching staff at Louisiana Tech joined him at Cal for the same positions they had coached with the Bulldogs: offensive coordinator Tony Franklin, assistant head coach/wide receivers coach Rob Likens, running backs coach Pierre Ingram, and special teams coordinator/inside receivers coach Mark Tommerdahl. Longtime running backs coach Ron Gould, who had been with the program since 1997 and served under Tedford's predecessor, Tom Holmoe, left to become the head coach at UC Davis.
Rankings
Statistics
Scores by quarter (all opponents)
Scores by quarter (Pac-12 opponents)
References
California
California Golden Bears football seasons
California Golden Bears football |
1885284 | https://en.wikipedia.org/wiki/Tradecraft | Tradecraft | Tradecraft, within the intelligence community, refers to the techniques, methods and technologies used in modern espionage (spying) and generally, as part of the activity of intelligence assessment. This includes general topics or techniques (dead drops, for example), or the specific techniques of a nation or organization (the particular form of encryption (encoding) used by the National Security Agency, for example).
Examples
Agent handling is the management of espionage agents, principal agents, and agent networks (called "assets") by intelligence officers, who are typically known as case officers.
Analytic tradecraft is the body of specific methods for intelligence analysis.
Black bag operations are covert or clandestine entries into structures or locations to obtain information for human intelligence operations. This may require breaking and entering, lock picking, safe cracking, key impressions, fingerprinting, photography, electronic surveillance (including audio and video surveillance), mail manipulation ("flaps and seals"), forgery, and a host of other related skills.
Concealment devices are used to hide things for the purpose of secrecy or security. Examples in espionage include dead drop spikes for transferring notes or small items to other people, and hollowed-out coins or teeth for concealing suicide pills.
Cryptography is the practice and study of techniques for secure communication in the presence of third parties (called adversaries). More generally, it is about constructing and analyzing communications protocols that block adversaries.
A cut-out is a mutually trusted intermediary, method or channel of communication, facilitating the exchange of information between agents. People playing the role of cutouts usually only know the source and destination of the information to be transmitted, but are unaware of the identities of any other persons involved in the espionage process. Thus, a captured cutout cannot be used to identify members of an espionage cell.
A dead drop or "dead letter box" is a method of espionage tradecraft used to pass items between two individuals using a secret location and thus does not require them to meet directly. Using a dead drop permits a case officer and agent to exchange objects and information while maintaining operational security. The method stands in contrast to the 'live drop', so-called because two persons meet to exchange items or information.
"Drycleaning" is a countersurveillance technique for discerning how many "tails" (following enemy agents) an agent is being followed by, and by moving about, seemingly oblivious to being tailed, perhaps losing some or all of those doing surveillance.
Eavesdropping is secretly listening to the conversation of others without their consent, typically using a hidden microphone or a "bugged" or "tapped" phone line.
A false flag operation is a covert military or paramilitary operation designed to deceive in such a way that the operation appears as though it is being carried out by entities, groups, or nations other than those who actually planned and executed it. Operations carried out during peacetime by civilian organizations, as well as covert government agencies, may by extension be called false flag operations.
A front organization is any entity set up by and controlled by another organization, such as intelligence agencies. Front organizations can act for the parent group without the actions being attributed to the parent group. A front organization may appear to be a business, a foundation, or another organization.
A honey trap is a deceptive operation in which an attractive agent lures a targeted person into a romantic liaison and encourages them to divulge secret information during or after a sexual encounter.
Interrogation is a type of interviewing employed by officers of the police, military, and intelligence agencies with the goal of eliciting useful information from an uncooperative suspect. Interrogation may involve a diverse array of techniques, ranging from developing a rapport with the subject, to repeated questions, to sleep deprivation or, in some countries, torture.
A legend refers to a person with a well-prepared and credible made-up identity (cover background) who may attempt to infiltrate a target organization, as opposed to recruiting a pre-existing employee whose knowledge can be exploited.
A limited hangout is a partial admission of wrongdoing, with the intent of shutting down the further inquiry.
A microdot is text or an image substantially reduced in size onto a small disc to prevent detection by unintended recipients or officials who are searching for them. Microdots are, fundamentally, a steganographic approach to message protection. In Germany after the Berlin Wall was erected, special cameras were used to generate microdots that were then adhered to letters and sent through the mail. These microdots often went unnoticed by inspectors, and information could be read by the intended recipient using a microscope.
A one-time pad is an encryption technique that cannot be cracked if used correctly. In this technique, a plaintext is paired with a random, secret key (or pad) that is at least as long as the message and is never reused.
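The mechanics reduce to a byte-wise XOR of the message with the pad, as in the following sketch (Python's secrets module stands in here for the truly random, never-reused pad the technique requires):

```python
import secrets

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    """XOR each message byte with the corresponding pad byte."""
    assert len(pad) >= len(plaintext), "the pad must be at least as long as the message"
    return bytes(p ^ k for p, k in zip(plaintext, pad))

# Decryption is the same XOR operation applied with the same pad.
otp_decrypt = otp_encrypt

message = b"MEET AT DAWN"
pad = secrets.token_bytes(len(message))   # used once, then destroyed
ciphertext = otp_encrypt(message, pad)
print(otp_decrypt(ciphertext, pad))       # b'MEET AT DAWN'
```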
One-way voice link is typically a radio-based communication method used by spy networks to communicate with agents in the field, typically (but not exclusively) using shortwave radio frequencies. Since the 1970s, infrared point-to-point communication systems offering one-way voice links have also been used, but the number of users has always been limited. A numbers station is an example of a one-way voice link, often broadcasting to a field agent who may already know the intended meaning of the code or may use a one-time pad to decode it. Numbers stations often continue to broadcast gibberish or random messages according to their usual schedule; this is done to expend the resources of adversaries as they try in vain to make sense of the data, and to avoid revealing the purpose of the station or the activity of agents by broadcasting only when needed.
Steganography is the art or practice of concealing a message, image, or file within another message, image, or file. Generally, the hidden message will appear to be (or be part of) something else: images, articles, shopping lists, or some other cover text. For example, the hidden message may be in invisible ink between the visible lines of a private letter. The advantage of steganography over cryptography alone is that the intended secret message does not attract attention to itself as an object of scrutiny. Plainly visible encrypted messages—no matter how unbreakable—will arouse interest, and may in themselves be incriminating in countries where encryption is illegal.
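A common digital variant hides message bits in the least significant bits of image samples; the sketch below applies the same idea to a plain list of 8-bit values rather than a real image file, purely to show the principle:

```python
def hide(samples, message: bytes):
    """Write the message bits into the least significant bit of each sample."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(samples), "cover data too small for the message"
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit      # replace the LSB, leaving the other bits intact
    return out

def reveal(samples, length: int) -> bytes:
    """Read length bytes back out of the samples' least significant bits."""
    bits = [s & 1 for s in samples[:length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
        for n in range(0, len(bits), 8)
    )

cover = [120, 121, 122, 123] * 20         # stand-in for pixel values
stego = hide(cover, b"ASSET OK")
print(reveal(stego, 8))                   # b'ASSET OK'
```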
Surveillance is the monitoring of the behavior, activities, or other changing information, usually of people for the purpose of influencing, managing, directing, or protecting them. This can include observation from a distance by means of electronic equipment (such as CCTV cameras), or interception of electronically transmitted information (such as Internet traffic or phone calls); and it can include simple, relatively no- or low-technology methods such as human intelligence agents watching a person and postal interception. The word surveillance comes from a French phrase for "watching over" ("sur" means "from above" and "veiller" means "to watch").
TEMPEST is a National Security Agency specification and NATO certification referring to spying on information systems through compromising emanations such as unintentional radio or electrical signals, sounds, and vibrations. TEMPEST covers both methods to spy upon others and also how to shield equipment against such spying. The protection efforts are also known as emission security (EMSEC), which is a subset of communications security (COMSEC).
In popular culture
In books
In the books of such authors as thriller writer Grant Blackwood, espionage writer Tom Clancy, and spy novelists Ian Fleming and John le Carré, characters frequently engage in tradecraft, e.g., making or retrieving items from "dead drops", "dry cleaning", and wiring, using, or sweeping for intelligence gathering devices, such as cameras or microphones hidden in the subjects' quarters, vehicles, clothing, or accessories.
In film
In the 2012 film Zero Dark Thirty, the main CIA operative Maya noted that her suspected senior al-Qaeda courier was exhibiting signs of using tradecraft.
In the 2006 action thriller motion picture Mission: Impossible III, an operative hid a microdot on the back of a postage stamp. The microdot contained a magnetically stored video file.
In the 2003 sci-fi film Paycheck, a microdot is a key plot element; the film shows how well a microdot can be made to blend into an environment and how much information such a dot can carry.
See also
Clandestine HUMINT operational techniques
United States Geospatial Intelligence Foundation
References
Further reading
Dhar, M.K. Intelligence Trade Craft: Secrets of Spy Warfare. 2011.
Jenkins, Peter. Surveillance Tradecraft. Intel Publishing UK, 2010.
Topalian, Paul Charles. Tradecraft Primer: A Framework for Aspiring Interrogators. CRC Press, 2016.
External links
Tradecraft Notes - via Professor J. Ransom Clark, Muskingum College
Espionage techniques |
399678 | https://en.wikipedia.org/wiki/Fermi%20Gamma-ray%20Space%20Telescope | Fermi Gamma-ray Space Telescope | The Fermi Gamma-ray Space Telescope (FGST, also FGRST), formerly called the Gamma-ray Large Area Space Telescope (GLAST), is a space observatory being used to perform gamma-ray astronomy observations from low Earth orbit. Its main instrument is the Large Area Telescope (LAT), with which astronomers mostly intend to perform an all-sky survey studying astrophysical and cosmological phenomena such as active galactic nuclei, pulsars, other high-energy sources and dark matter. Another instrument aboard Fermi, the Gamma-ray Burst Monitor (GBM; formerly GLAST Burst Monitor), is being used to study gamma-ray bursts and solar flares.
Fermi was launched on 11 June 2008 at 16:05 UTC aboard a Delta II 7920-H rocket. The mission is a joint venture of NASA, the United States Department of Energy, and government agencies in France, Germany, Italy, Japan, and Sweden. It became the most sensitive gamma-ray telescope in orbit, succeeding INTEGRAL. The project is a recognized CERN experiment (RE7).
Overview
Fermi includes two scientific instruments, the Large Area Telescope (LAT) and the Gamma-ray Burst Monitor (GBM).
The LAT is an imaging gamma-ray detector (a pair-conversion instrument) which detects photons with energy from about 20 million to about 300 billion electronvolts (20 MeV to 300 GeV), with a field of view of about 20% of the sky; it may be thought of as a sequel to the EGRET instrument on the Compton Gamma Ray Observatory.
The GBM consists of 14 scintillation detectors (twelve sodium iodide crystals for the 8 keV to 1 MeV range and two bismuth germanate crystals with sensitivity from 150 keV to 30 MeV), and can detect gamma-ray bursts in that energy range across the whole of the sky not occluded by the Earth.
General Dynamics Advanced Information Systems (formerly Spectrum Astro and now Orbital Sciences) in Gilbert, Arizona, designed and built the spacecraft that carries the instruments. It travels in a low, circular orbit with a period of about 95 minutes. Its normal mode of operation maintains its orientation so that the instruments look away from the Earth, with a "rocking" motion to equalize the coverage of the sky. The view of the instruments sweeps out across most of the sky about 16 times per day. The spacecraft can also maintain an orientation that points to a chosen target.
Both science instruments underwent environmental testing, including vibration, vacuum, and high and low temperatures to ensure that they can withstand the stresses of launch and continue to operate in space. They were integrated with the spacecraft at the General Dynamics ASCENT facility in Gilbert, Arizona.
Data from the instruments are available to the public through the Fermi Science Support Center web site. Software for analyzing the data is also available.
GLAST renamed Fermi Gamma-ray Space Telescope
NASA's Alan Stern, associate administrator for Science at NASA Headquarters, launched a public competition 7 February 2008, closing 31 March 2008, to rename GLAST in a way that would "capture the excitement of GLAST's mission and call attention to gamma-ray and high-energy astronomy ... something memorable to commemorate this spectacular new astronomy mission ... a name that is catchy, easy to say and will help make the satellite and its mission a topic of dinner table and classroom discussion".
On 26 August 2008, GLAST was renamed the Fermi Gamma-ray Space Telescope in honor of Enrico Fermi, a pioneer in high-energy physics.
Mission
NASA designed the mission with a five-year lifetime, with a goal of ten years of operations.
The key scientific objectives of the Fermi mission have been described as:
Understand the mechanisms of particle acceleration in active galactic nuclei (AGN), pulsars, and supernova remnants (SNR).
Resolve the gamma-ray sky: unidentified sources and diffuse emission.
Determine the high-energy behavior of gamma-ray bursts and transients.
Probe dark matter (e.g. by looking for an excess of gamma rays from the center of the Milky Way) and early Universe.
Search for evaporating primordial micro black holes (MBH) from their presumed gamma burst signatures (Hawking Radiation component).
The National Academies of Sciences ranked this mission as a top priority. Many new possibilities and discoveries are anticipated to emerge from this single mission and greatly expand our view of the Universe.
Blazars and active galaxies
Study the energy spectra and variability of light from blazars in order to determine the composition of the black hole jets aimed directly at Earth: whether they are (a) a combination of electrons and positrons or (b) only protons.
Gamma-ray bursts
Study gamma-ray bursts with an energy range several times more intense than ever before so that scientists may be able to understand them better.
Neutron stars
Study younger, more energetic pulsars in the Milky Way than ever before so as to broaden our understanding of stars. Study the pulsed emissions of pulsar magnetospheres in order to help determine how they are produced. Study how pulsars generate winds of energetic particles.
Milky Way galaxy
Provide new data to help improve upon existing theoretical models of our own galaxy.
Gamma-ray background radiation
Study better than ever before whether ordinary galaxies are responsible for the gamma-ray background radiation. The potential for a major discovery awaits if ordinary sources turn out not to be responsible, in which case the cause may be anything from self-annihilating dark matter to entirely new chain reactions among interstellar particles that have yet to be conceived.
The early universe
Study better than ever before how concentrations of visible and ultraviolet light change over time. The mission should easily detect regions of spacetime where gamma rays interacted with visible or UV light to make matter. This can be seen as an example of E=mc² working in reverse, where energy is converted into mass, in the early universe.
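As a rough, back-of-the-envelope check (standard two-photon kinematics rather than a figure from the mission itself), a gamma ray of energy E1 can create an electron-positron pair in a head-on collision with a photon of energy E2 only if

E_1 E_2 \ge (m_e c^2)^2 \approx (0.511\ \mathrm{MeV})^2,

so a 100 GeV gamma ray can pair-produce against photons of energy

E_2 \gtrsim \frac{(0.511\ \mathrm{MeV})^2}{100\ \mathrm{GeV}} \approx 2.6\ \mathrm{eV},

which is light in the visible to near-ultraviolet range; this is why the observed attenuation of distant gamma-ray sources probes the amount of visible and UV light emitted over cosmic history.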
Sun
Study better than ever before how our own Sun produces gamma rays in solar flares.
Dark matter
Search for evidence that dark matter is made up of weakly interacting massive particles, complementing similar experiments already planned for the Large Hadron Collider as well as other underground detectors. The potential for a tremendous discovery in this area is possible over the next several years.
Fundamental physics
Test better than ever before certain established theories of physics, such as whether the speed of light in vacuum remains constant regardless of wavelength. Einstein's general theory of relativity contends that it does, yet some models in quantum mechanics and quantum gravity predict that it may not. Search for gamma rays emanating from former black holes that once exploded, providing yet another potential step toward the unification of quantum mechanics and general relativity. Determine whether photons naturally split into smaller photons, as predicted by quantum mechanics and already achieved under controlled, man-made experimental conditions.
Unknown discoveries
Scientists consider it very likely that new, possibly revolutionary, scientific discoveries will emerge from this single mission.
Mission timeline
Prelaunch
On 4 March 2008, the spacecraft arrived at the Astrotech payload processing facility in Titusville, Florida. On 4 June 2008, after several previous delays, launch status was retargeted for 11 June at the earliest, the last delays resulting from the need to replace the Flight Termination System batteries. The launch window extended from 15:45 to 17:40 UTC daily, until 7 August 2008.
Launch
Launch occurred successfully on 11 June 2008 at 16:05 UTC aboard a Delta 7920H-10C rocket from Cape Canaveral Air Force Station Space Launch Complex 17-B. Spacecraft separation took place about 75 minutes after launch.
Orbit
Fermi resides in a low-Earth circular orbit at an altitude of , and at an inclination of 28.5 degrees.
Software modifications
GLAST received some minor modifications to its computer software on 23 June 2008.
LAT/GBM computers operational
Computers operating both the LAT and GBM and most of the LAT's components were turned on 24 June 2008. The LAT high voltage was turned on 25 June, and it began detecting high-energy particles from space, but minor adjustments were still needed to calibrate the instrument. The GBM high voltage was also turned on 25 June, but the GBM still required one more week of testing/calibrations before searching for gamma-ray bursts.
Sky survey mode
After presenting an overview of the Fermi instrumentation and goals, Jennifer Carson of SLAC National Accelerator Laboratory concluded that the primary goals were "all achievable with the all-sky scanning mode of observing". Fermi switched to "sky survey mode" on 26 June 2008 so as to begin sweeping its field of view over the entire sky every three hours (every two orbits).
Collision avoided
On 30 April 2013, NASA revealed that the telescope had narrowly avoided a collision a year earlier with a defunct Cold War-era Soviet spy satellite, Kosmos 1805, in April 2012. Orbital predictions several days earlier indicated that the two satellites were expected to occupy the same point in space within 30 milliseconds of each other. On 3 April, telescope operators decided to stow the satellite's high-gain parabolic antenna, rotate the solar panels out of the way and to fire Fermi's rocket thrusters for one second to move it out of the way. Even though the thrusters had been idle since the telescope had been placed in orbit nearly five years earlier, they worked correctly and potential disaster was thus avoided.
Extended mission 2013-2018
In August 2013 Fermi started its 5-year mission extension.
Pass 8 software upgrade
In June 2015, the Fermi LAT Collaboration released "Pass 8 LAT data". Iterations of the analysis framework used by LAT are called "passes" and at launch Fermi LAT data was analyzed using Pass 6. Significant improvements to Pass 6 were included in Pass 7 which debuted in August 2011.
Every detection made by the Fermi LAT since its launch was reexamined with the latest tools to learn how the LAT detector responded both to each event and to the background. This improved understanding led to two major improvements: gamma rays that had been missed by previous analyses were detected, and the direction they arrived from was determined with greater accuracy. The effect of the latter is to sharpen Fermi LAT's vision. Pass 8 also delivers better energy measurements and a significantly increased effective area. The entire mission dataset was reprocessed.
These improvements have the greatest impact at both the low and high ends of the energy range Fermi LAT can detect, in effect expanding the range within which the LAT can make useful observations. The improvement in the performance of Fermi LAT due to Pass 8 is so dramatic that this software update is sometimes called the cheapest satellite upgrade in history. Among numerous advances, it allowed a better search for Galactic spectral lines from dark matter interactions, analysis of extended supernova remnants, and searches for extended sources in the Galactic plane.
For almost all event classes, version P8R2 had a residual background that was not fully isotropic. This anisotropy was traced to cosmic-ray electrons leaking through the ribbons of the Anti-Coincidence Detector, and a set of cuts allowed these events to be rejected while minimally impacting acceptance. This selection was used to create the P8R3 version of LAT data.
Solar array drive failure
On 16 March 2018 one of Fermi's solar arrays quit rotating, prompting a transition to "safe hold" mode and instrument power off. This was the first mechanical failure in nearly 10 years. Fermi's solar arrays rotate to maximize the exposure of the arrays to the Sun. The motor that drives that rotation failed to move as instructed in one direction. On 27 March, the satellite was placed at a fixed angle relative to its orbit to maximize solar power. The next day the GBM instrument was turned back on. On 2 April, operators turned LAT on and it resumed operations on 8 April. Alternative observation strategies are being developed due to power and thermal requirements.
Further extension into 2022
In 2019, a NASA Senior Review concluded that Fermi should continue to be operated into 2022, a decision that was subsequently approved by NASA. Further extensions remain possible.
Discoveries
Pulsar discovery
The first major discovery came when the space telescope detected a pulsar in the CTA 1 supernova remnant that appeared to emit radiation in the gamma ray bands only, a first for its kind. This new pulsar sweeps the Earth every 316.86 milliseconds and is about 4,600 light-years away.
Greatest GRB energy release
In September 2008, the gamma-ray burst GRB 080916C in the constellation Carina was recorded by the Fermi telescope. This burst is notable as having "the largest apparent energy release yet measured". The explosion had the power of about 9,000 ordinary supernovae, and the relativistic jet of material ejected in the blast must have moved at a minimum of 99.9999% the speed of light. Overall, GRB 080916C had "the greatest total energy, the fastest motions, and the highest initial-energy emissions" ever seen.
Cosmic rays and supernova remnants
In February 2010, it was announced that Fermi-LAT had determined that supernova remnants act as enormous accelerators for cosmic particles. This determination fulfills one of the stated missions for this project.
Background gamma ray sources
In March 2010 it was announced that active galactic nuclei are not responsible for most gamma-ray background radiation. Though active galactic nuclei do produce some of the gamma-ray radiation detected here on Earth, less than 30% originates from these sources. The search now is to locate the sources for the remaining 70% or so of all gamma-rays detected. Possibilities include star forming galaxies, galactic mergers, and yet-to-be explained dark matter interactions.
Milky Way Gamma- and X-ray emitting Fermi bubbles
In November 2010, it was announced that two gamma-ray- and X-ray-emitting bubbles had been detected around the Milky Way, the host galaxy of Earth and the Solar System. The bubbles, named Fermi bubbles, extend about 25 thousand light-years above and below the galactic center. The galaxy's diffuse gamma-ray fog hampered prior observations, but the discovery team led by D. Finkbeiner, building on research by G. Dobler, worked around this problem.
Highest energy light ever seen from the Sun
In early 2012, Fermi/GLAST observed the highest energy light ever seen in a solar eruption.
Terrestrial gamma-ray flash observations
Fermi telescope has observed and detected numerous terrestrial gamma-ray flashes and discovered that such flashes can produce 100 trillion positrons, far more than scientists had previously expected.
GRB 130427A
On 27 April 2013, Fermi detected GRB 130427A, a gamma-ray burst with one of the highest energy outputs yet recorded.
This included detection of a gamma-ray over 94 billion electron volts (GeV). This broke Fermi's previous record detection, by over three times the amount.
GRB coincident with gravitational wave event GW150914
Fermi reported that its GBM instrument detected a weak gamma-ray burst above 50 keV, starting 0.4 seconds after the LIGO event and with a positional uncertainty region overlapping that of the LIGO observation. The Fermi team calculated the odds of such an event being the result of a coincidence or noise at 0.22%. However, observations from the INTEGRAL telescope's all-sky SPI-ACS instrument indicated that any energy emission in gamma-rays and hard X-rays from the event was less than one millionth of the energy emitted as gravitational waves, concluding that "this limit excludes the possibility that the event is associated with substantial gamma-ray radiation, directed towards the observer." If the signal observed by the Fermi GBM was associated with GW150914, SPI-ACS would have detected it with a significance of 15 sigma above the background. The AGILE space telescope also did not detect a gamma-ray counterpart of the event. A follow-up analysis of the Fermi report by an independent group, released in June 2016, purported to identify statistical flaws in the initial analysis, concluding that the observation was consistent with a statistical fluctuation or an Earth albedo transient on a 1-second timescale. A rebuttal of this follow-up analysis, however, pointed out that the independent group misrepresented the analysis of the original Fermi GBM Team paper and therefore misconstrued the results of the original analysis. The rebuttal reaffirmed that the false coincidence probability is calculated empirically and is not refuted by the independent analysis.
In October 2018, astronomers reported that GRB 150101B, 1.7 billion light years away from Earth, may be analogous to the historic GW170817. It was detected on 1 January 2015 at 15:23:35 UT by the Gamma-ray Burst Monitor on board the Fermi Gamma-ray Space Telescope, along with detections by the Burst Alert Telescope (BAT) on board the Swift Observatory Satellite.
Black hole mergers of the type thought to have produced the gravitational wave event are not expected to produce gamma-ray bursts, as stellar-mass black hole binaries are not expected to have large amounts of orbiting matter. Avi Loeb has theorised that if a massive star is rapidly rotating, the centrifugal force produced during its collapse will lead to the formation of a rotating bar that breaks into two dense clumps of matter with a dumbbell configuration that becomes a black hole binary, and at the end of the star's collapse it triggers a gamma-ray burst. Loeb suggests that the 0.4 second delay is the time it took the gamma-ray burst to cross the star, relative to the gravitational waves.
GRB 170817A signals a multi-messenger transient
On 17 August 2017, Fermi Gamma-Ray Burst Monitor software detected, classified, and localized a gamma-ray burst which was later designated as GRB 170817A. Six minutes later, a single detector at Hanford LIGO registered a gravitational-wave candidate which was consistent with a binary neutron star merger, occurring 2 seconds before the GRB 170817A event. This observation was "the first joint detection of gravitational and electromagnetic radiation from a single source".
Instruments
Gamma-ray Burst Monitor
The Gamma-ray Burst Monitor (GBM) (formerly GLAST Burst Monitor) detects sudden flares of gamma-rays produced by gamma ray bursts and solar flares. Its scintillators are on the sides of the spacecraft to view all of the sky which is not blocked by the Earth. The design is optimized for good resolution in time and photon energy, and is sensitive from (a medium X-ray) to (a medium-energy gamma-ray).
"Gamma-ray bursts are so bright we can see them from billions of light-years away, which means they occurred billions of years ago, and we see them as they looked then", stated Charles Meegan of NASA's Marshall Space Flight Center.
The Gamma-ray Burst Monitor has detected gamma rays from positrons generated in powerful thunderstorms.
Large Area Telescope
The Large Area Telescope (LAT) detects individual gamma rays using technology similar to that used in terrestrial particle accelerators. Photons hit thin metal sheets, converting to electron-positron pairs via a process termed pair production. These charged particles pass through interleaved layers of silicon microstrip detectors, causing ionization which produces detectable tiny pulses of electric charge. Researchers can combine information from several layers of this tracker to determine the path of the particles. After passing through the tracker, the particles enter the calorimeter, which consists of a stack of caesium iodide scintillator crystals to measure the total energy of the particles. The LAT's field of view is large, about 20% of the sky. The resolution of its images is modest by astronomical standards, a few arc minutes for the highest-energy photons and about 3 degrees at 100 MeV. It is sensitive from to (from medium up to some very-high-energy gamma rays). The LAT is a bigger and better successor to the EGRET instrument on NASA's Compton Gamma Ray Observatory satellite in the 1990s. Several countries produced components of the LAT, which were then sent to SLAC National Accelerator Laboratory for assembly. SLAC also hosts the LAT Instrument Science Operations Center, which supports the operation of the LAT during the Fermi mission for the LAT scientific collaboration and for NASA.
Education and public outreach
Education and public outreach are important components of the Fermi project. The main Fermi education and public outreach website at http://glast.sonoma.edu offers gateways to resources for students, educators, scientists, and the public. NASA's Education and Public Outreach (E/PO) group operates the Fermi education and outreach resources at Sonoma State University.
Rossi Prize
The 2011 Bruno Rossi Prize was awarded to Bill Atwood, Peter Michelson and the Fermi LAT team "for enabling, through the development of the Large Area Telescope, new insights into neutron stars, supernova remnants, cosmic rays, binary systems, active galactic nuclei and gamma-ray bursts."
In 2013, the prize was awarded to Roger W. Romani of Leland Stanford Junior University and Alice Harding of Goddard Space Flight Center for their work in developing the theoretical framework underpinning the many exciting pulsar results from Fermi Gamma-ray Space Telescope.
The 2014 prize went to Tracy Slatyer, Douglas Finkbeiner and Meng Su "for their discovery, in gamma rays, of the large unanticipated Galactic structure called the Fermi bubbles."
The 2018 prize was awarded to Colleen Wilson-Hodge and the Fermi GBM team for the detection of GRB 170817A, the first unambiguous and completely independent discovery of an electromagnetic counterpart to a gravitational wave signal (GW170817) that "confirmed that short gamma-ray bursts are produced by binary neutron star mergers and enabled a global multi-wavelength follow-up campaign."
See also
Galactic Center GeV excess
GRB 160625B
List of gamma-ray bursts
eROSITA
References
External links
Fermi website at NASA.gov
Fermi website by NASA's Goddard Space Flight Center
Fermi website at Sonoma.edu
Large Area Telescope website at Stanford.edu
Large Area Telescope publications
Gamma-ray Burst Monitor website by NASA's Marshall Space Flight Center
Gamma-ray Burst Monitor publications
Astrophysics
Sonoma State University
Space telescopes
Gamma-ray telescopes
Spacecraft launched in 2008
Spacecraft launched by Delta II rockets
Articles containing video clips
CERN experiments |
66308802 | https://en.wikipedia.org/wiki/DEC%20MICA | DEC MICA | MICA was the codename of the operating system developed for the DEC PRISM architecture. MICA was designed by a team at Digital Equipment Corporation led by Dave Cutler. MICA's design was driven by Digital's need to provide a migration path to PRISM for Digital's VAX/VMS customers, as well as allowing PRISM systems to compete in the increasingly important Unix market. MICA attempted to address these requirements by implementing VMS and ULTRIX user interfaces on top of a common kernel that could support the system calls (or "system services" in VMS parlance), libraries and utilities needed for both environments.
MICA was cancelled in 1988 along with the PRISM architecture, before either project was complete. MICA is most notable for inspiring the design of Windows NT. When the PRISM architecture evolved into the DEC Alpha architecture, Digital opted to port OSF/1 and VMS to Alpha instead of reusing MICA.
Design goals
The original goal for MICA was that all applications would have full and interchangeable access to both the VMS and ULTRIX interfaces, and that a user could choose to log in to a ULTRIX or VMS environment, and run any MICA application from either environment. However, it proved to be impossible to provide both full ULTRIX and full VMS compatibility to the same application at the same time, and Digital scrapped this plan in favour of having a separate Unix operating system based on OSF/1 (this was variously referred to as PRISM ULTRIX or OZIX). As a result, MICA would have served as a portable implementation of a VMS-like operating system, with compatible implementations of DCL, RMS, Files-11, VAXclusters, and the VAX/VMS RTLs and system services. Proposals were made for reinstating Unix compatibility in MICA on a per-application basis so that a MICA application could be compiled and linked against the VMS interfaces, or the ULTRIX interfaces, but not both simultaneously.
Due to scheduling concerns, the first PRISM systems would have been delivered with restricted subsets of the full MICA operating system. This included systems such as Cheyenne and Glacier which were dedicated to running specific applications, and where direct interaction with the operating system by customers would be limited.
Programming
MICA was to be written almost entirely in a high-level programming language named PILLAR. PILLAR evolved from EPascal (the VAXELN-specific dialect of Pascal) via an interim language called the Systems Implementation Language (SIL). PILLAR would have been backported to VAX/VMS, allowing applications to be developed that could be compiled for both VAX/VMS and MICA. A common set of high-level runtime libraries named ARUS (Application Runtime Utility Services) would have further facilitated portability between MICA, OSF/1, VAX/VMS and ULTRIX. As part of the PRISM project, a common optimizing compiler backend named GEM was developed (this survived and became the compiler backend for the Alpha and Itanium ports of VMS, as well as Tru64).
In addition to PILLAR, Mica provided first-class support for ANSI C in order to support Unix applications. An assembler named SPASM (Simplified PRISM Assembler) was intended for the small amount of assembly code needed for the operating system, and would not have been made generally available in order to dissuade customers from developing non-portable software. Similarly, an implementation of BLISS was developed for internal use only, in order to allow pre-existing VAX/VMS applications to be ported to MICA. MICA would have featured ports or rewrites of many VAX/VMS layered products, including Rdb, VAXset, DECwindows, and most of the compilers available for VAX/VMS.
Legacy
When PRISM and MICA were cancelled, Dave Cutler left Digital for Microsoft, where he was put in charge of the development of what became known as Windows NT. Cutler's architecture for NT was heavily inspired by many aspects of MICA. In addition to the implementation of multiple operating system APIs on top of a common kernel (Win32, OS/2 and POSIX in NT's case) MICA and NT shared the separation of the kernel from the executive, the use of an Object Manager as the abstraction for interfacing with operating system data structures, and support for multithreading and symmetric multiprocessing.
After the cancellation of PRISM, Digital began a project to produce a faster VAX implementation which could run VMS and provide comparable performance to its DECstation line of Unix systems. When these attempts failed, the design group concluded that VMS itself could be ported to a PRISM-like architecture. This led to the DEC Alpha architecture, and the Alpha port of VMS.
References
Digital Equipment Corporation
DEC operating systems
Proprietary operating systems
Time-sharing operating systems |
49338845 | https://en.wikipedia.org/wiki/YemenSoft%20Inc | YemenSoft Inc | YemenSoft Inc. is a Yemeni software company headquartered in Sana’a, Yemen, with regional offices in the Middle East, North America and Africa. The company is known for its enterprise software for managing business plans, operations, contracts and customer relations. As of 2015, YemenSoft solutions are being used by more than 11,000 clients in more than 14 countries.
History
YemenSoft was founded in Sana’a, Yemen, by Ali Alyousify in 1993. Alyousify saw an opportunity in the enterprise software market to provide customized software for small businesses, institutions and individual offices. He launched YemenSoft as a software development company providing software to various sectors. The company released its first financial system, Al-Mohaseb1, as open-source software.
After completing the World Bank project in Yemen, YemenSoft began expanding in the Middle East and North Africa. It opened its first regional offices in Jeddah, Saudi Arabia, in 2009 and in Cairo, Egypt, in 2010, after enhancing its services with the new solutions Motakamel Plus and Onyx Pro. Ultimate Solution Inc., a sister company, is the official distributor in Saudi Arabia.
Products
Onyx Pro supports back-office operations for large organizations, including financials, human resources, orders, manufacturing, inventory, shipping and billing.
Motakamel Plus supports back-office operations for small and medium-sized organizations, including financials, human resources, inventory, shipping and billing.
AlMohasib1 is a free, supported financial application that can be downloaded and used.
Recognition
In 2012, YemenSoft was listed by Red Herring as one of 100 companies around the world that provide products meeting international standards with a "clear future vision". In 2013, YemenSoft was honored as the "best company of software development" by Investor Corporation, the General Investment Authority and the Commercial Industrial Chambers Union.
Partnership
In 2002, YemenSoft was chosen by the World Bank for the Yemeni Government Computerization project in partnership with Synerma and Computer Engineering World.
References
Companies of Yemen |
20588127 | https://en.wikipedia.org/wiki/SPSS%20Modeler | SPSS Modeler | IBM SPSS Modeler is a data mining and text analytics software application from IBM. It is used to build predictive models and conduct other analytic tasks. It has a visual interface which allows users to leverage statistical and data mining algorithms without programming.
One of its main aims from the outset was to get rid of unnecessary complexity in data transformations, and to make complex predictive models very easy to use.
The first version incorporated decision trees (ID3), and neural networks (backprop), which could both be trained without underlying knowledge of how those techniques worked.
IBM SPSS Modeler was originally named Clementine by its creators, Integral Solutions Limited. This name continued for a while after SPSS's acquisition of the product. SPSS later changed the name to SPSS Clementine, and then later to PASW Modeler. Following IBM's 2009 acquisition of SPSS, the product was renamed IBM SPSS Modeler, its current name.
Applications
SPSS Modeler has been used in these and other industries:
Customer analytics and Customer relationship management (CRM)
Fraud detection and prevention
Optimizing insurance claims
Risk management
Manufacturing quality improvement
Healthcare quality improvement
Forecasting demand or sales
Law enforcement and border security
Education
Telecommunications
Entertainment: e.g., predicting movie box office receipts
Editions
IBM sells the current version of SPSS Modeler (version 18.2.1) in two separate bundles of features. These two bundles are called "editions" by IBM:
SPSS Modeler Professional: used for structured data, such as databases, mainframe data systems, flat files or BI systems
SPSS Modeler Premium: includes all the features of Modeler Professional, with the addition of text analytics
Both editions are available in desktop and server configurations.
In addition to the traditional IBM SPSS Modeler desktop installations, IBM now offers the SPSS Modeler interface as an option in the Watson Studio product line which includes Watson Studio (cloud), Watson Studio Local, and Watson Studio Desktop.
Watson Studio Desktop documentation: https://www.ibm.com/support/knowledgecenter/SSBFT6_1.1.0/mstmap/kc_welcome.html
Release history
Clementine 1.0 – June 1994 by ISL
Clementine 5.1 – Jan 2000
Clementine 12.0 – Jan 2008
PASW Modeler 13 (formerly Clementine) – April 2009
IBM SPSS Modeler 14.0 – 2010
IBM SPSS Modeler 14.2 – 2011
IBM SPSS Modeler 15.0 – June 2012
IBM SPSS Modeler 16.0 – December 2013
IBM SPSS Modeler 17.0 – March 2015
IBM SPSS Modeler 18.0 – March 2016
IBM SPSS Modeler 18.1 – June 2017
IBM SPSS Modeler 18.2 – March 2019
Product history
Early versions of the software were called Clementine and were Unix-based. The first version was released on 9 June 1994, after beta testing at six customer sites. Clementine was originally developed by a UK company named Integral Solutions Limited (ISL), in collaboration with artificial intelligence researchers at Sussex University. The original Clementine was implemented in Poplog, which ISL marketed for Sussex University.
Clementine mainly used the Poplog language Pop11, with some parts written in C for speed (such as the neural network engine), along with additional tools provided as part of Solaris, VMS and various versions of Unix. The tool quickly garnered the attention of the data mining community (at that time in its infancy).
In order to reach a larger market, ISL then ported Poplog to Microsoft Windows using the NutCracker package (later named MKS Toolkit) to provide the Unix graphical facilities. Original in many respects, Clementine was the first data mining tool to use an icon-based graphical user interface rather than requiring users to write in a programming language, though that option remained available for expert users.
In 1998 ISL was acquired by SPSS Inc., which saw the potential for extended development as a commercial data mining tool. In early 2000 the software was developed into a client/server architecture, and shortly afterward the client front-end interface component was completely rewritten and replaced with a new Java front-end, which allowed deeper integration with the other tools provided by SPSS.
SPSS Clementine version 7.0: The client front-end runs under Windows. The server back-end runs on Unix variants (Sun, HP-UX, AIX), Linux, and Windows. The graphical user interface is written in Java.
IBM SPSS Modeler 14.0 was the first release of Modeler by IBM.
IBM SPSS Modeler 15, released in June 2012, introduced significant new functionality for Social Network Analysis and Entity Analytics.
See also
IBM SPSS Statistics
List of statistical packages
Cross Industry Standard Process for Data Mining
References
Further reading
External links
SPSS Modeler 18.2.1 Documentation
Users Guide – SPSS Modeler 18.2.1
IBM SPSS Modeler website
IBM SPSS Modeler online from cloud
Data mining and machine learning software
Proprietary commercial software for Linux
Artificial intelligence |
37590035 | https://en.wikipedia.org/wiki/READ%20180 | READ 180 | READ 180 is a reading intervention program, utilizing adaptive technology, in wide use by students in Grades 4–12 who read at least two years below grade level. It was created by Scholastic Corporation. In 2011, Scholastic released its newest version, READ 180 Next Generation, which has been fully aligned to meet the demands of the Common Core State Standards Initiative. Scholastic sold READ 180 to Houghton Mifflin Harcourt in 2015.
READ 180 is based on a blended instructional model that includes whole-group instruction and three small-group rotations, adaptive software, differentiated instruction, and independent reading.
The program has three different versions: Upper Elementary (Grades 4–5), Middle School (Grades 6-8), and High School (Grades 9–12).
Placement
The Scholastic Reading Inventory (SRI) is a technology-based universal screener and progress monitor. SRI is used to generate a Lexile, or readability level, for each student. The purpose of administering the SRI is to determine if the student is a candidate for intervention. SRI is software that “assesses students’ reading levels, tracks students’ growth over time, and helps guide instruction according to students’ needs.”
READ 180 is a reading intervention program that provides individualized instruction to meet each student’s reading needs. The technology collects data based on individual responses and adjusts instruction to meet each students’ needs at their level, accelerating their path to reading mastery.
Teachers begin and end each class session with whole-group instruction. Next students break into one of three rotations. First, the teacher leads small-group instruction, using the READ 180 worktext called the rBook, teachers monitor reading and differentiate instruction based on students’ needs. Second, students work independently in the READ 180 software. The software leads students through five Learning Zones: the Reading Zone, the Word Zone, the Spelling Zone, the Success Zone, and the Writing Zone. Third, the students read independently in Independent Reading. Students select from the READ 180 paperback or audiobook library and read a fiction or nonfiction book (or eRead).
History
READ 180 was founded in 1985 by Ted Hasselbring and members of the Cognition and Technology Group at Vanderbilt University. With a grant from the United States Department of Education’s Office of Special Education, Dr. Hasselbring developed software that used student performance data to individualize and differentiate the path of computerized reading instruction. This software became the prototype for the READ 180 program.
Between 1994 and 1998, Dr. Hasselbring and his team put their work to the test in Orange County, Florida. The Orange County Literacy Project used this READ 180 prototype with more than 10,000 struggling students. The dramatic results Orange County public schools experienced were documented in the Journal of Research on Educational Effectiveness. These results led Scholastic to partner with Orange County public schools and Vanderbilt University to license the software, and to launch READ 180.
After the initial launch of READ 180, Scholastic released Enterprise Edition in 2006 in collaboration with Dr. Kevin Feldman and Dr. Kate Kinsella. READ 180 Enterprise Edition featured the READ 180 rBook, structured engagement routines for English language learners, and the Scholastic Achievement Manager (SAM).
Reports
A number of studies have been conducted regarding the effectiveness of using READ 180 in the classroom. The company's webpage includes a searchable list of research articles.
Below is a sample of some of the current research available on READ 180.
The U.S. Department of Education Striving Readers Project shows READ 180 effective in combating adolescent illiteracy.
The Institute for Educational Science (IES) What Works Clearinghouse recognized READ 180 for potentially positive effects in comprehension and general literacy achievement.
Slavin, Cheung, and Groff, and Lake (2008) placed READ 180 in a select group of adolescent literacy programs that showed more evidence of effectiveness than 121 other programs reviewed.
Harty, Fitzgerald, and Porter (2008) indicated that READ 180 can be successfully implemented in an afterschool setting.
Lang, Torgesen, Vogel, Chanter, Lefsky, and Petscher (2009) published a study which indicated that ninth-grade students enrolled in READ 180 exceeded the benchmark for expected yearly growth on the Florida Comprehensive Assessment Test.
De La Paz (1997) documented the foundational research conducted by Dr. Ted Hasselbring and his team from Peabody College at Vanderbilt University.
References
Slavin, R. E., Cheung, A., Groff, C., & Lake, C. (2008). Effective reading programs for middle and high schools: A best evidence synthesis. Reading Research Quarterly, 43 (3), 290–322.
Harty, Fitzgerald, & Porter. (2008). Implementing a Structured reading program in an afterschool setting: Problems and potential solutions. Harvard Educational Review.
Lang, L., Torgesen, J. K., Vogel, W., Chanter, C., Lefsky, E., & Petscher, Y. (2009). Exploring the relative effectiveness of reading interventions for high school students. Journal of Research on Educational Effectiveness, 2: 149–175.
De La Paz, S. (1997). Managing cognitive demands for writing: Comparing the effects of instructional components in strategy instruction. Reading and Writing Quarterly, 23, 249–266.
Learning to read
Curricula
Scholastic Corporation |
6305906 | https://en.wikipedia.org/wiki/Archimedes%20%28CAD%29 | Archimedes (CAD) | Archimedes – "The Open CAD" – (also called Arquimedes) is a computer-aided design (CAD) program being developed with direct input from architects and architecture firms. With this design philosophy, the developers hope to create software better suited for architecture than the currently widely used AutoCAD, and other available CAD software. The program is free software released under the Eclipse Public License.
Features
Basic drawing
Lines, Polylines, Arcs and Circles.
Editable Text
Explode
Offset
Advanced CAD functions
Trimming
Filleting
Area measurement
Miscellaneous
Autosave
SVG export
PDF export
English, Portuguese, and Italian language support
Integration with other CAD systems
Archimedes uses its own XML-based open format, which resembles SVG. It does not yet include support for other CAD formats, but DXF support is planned.
Development
Archimedes is written in Java. The latest version runs on Windows, Mac OS X, and Linux/Unix-based systems, and may run on other platforms that are supported by LWJGL and have a Java virtual machine version 1.5.0 or later.
History
The Archimedes Project started as a collaboration between a group of programmers and architecture students at the University of São Paulo, in Brazil, in 2005. The project is currently developed as free and open-source software. A team of students from the university works on it as collaborators under the coordination of project leader Hugo Corbucci, but anyone is free to contribute plugins and/or patches.
Timeline
Archimedes was registered as a SourceForge.net project on 12 July 2005.
The last stable pre-RCP version was 0.16.0, released on 25 October 2000.
The first stable version after the RCP migration was 0.50.0, released on 25 April 2007.
The latest stable version is 0.66.1, which was released on 30 May 2012.
Migration to Eclipse RCP in version 0.5x
A migration to the Eclipse Rich Client Platform in the 0.5x versions greatly improved the user interface model and stability, but some functionality from the last pre-RCP version is still being transferred. Version 0.58.0 advanced this process by adding trimming, leaders, and SVG and PDF exporting.
External links
Archimedes Home Page
Archimedes on SourceForge.net
Hugo Corbucci's Blog (Archimedes Project Lead)
Github Repository
References
Computer-aided design software for Linux
Free computer-aided design software
Free software programmed in Java (programming language) |
42332604 | https://en.wikipedia.org/wiki/Genius%20Project | Genius Project | Genius Project is project portfolio management software (PPM). The product includes a KPI module, Gantt charting, support for project requests and help desk/trouble tickets as work items, support for scrum, configurable views, meeting management, project output tracking, and the Genius Live social collaboration platform.
History
Genius Project is developed by Genius Inside. It was first released in 1997 and is available in English, German, Spanish and French. It is project management software intended for companies of any size and industry. The product has evolved from a simple project management solution into a full suite of enterprise project and portfolio management applications, offered in both software-as-a-service and on-premises deployment options built on IBM middleware.
Competition
Severa
Doolphy
Goodwerp
See also
Comparison of project management software
Project management software
Awards
Winner of the Innovationspreis-IT 2014 in the "On-Demand" category
Winner of the Silver and Excellence Award 2014
Winner of the Silver and Excellence Award 2013
Winner of the IBM Lotus Awards: Best Mid-Market Solution 2008
Innovative Product 2007, Initiative Mittelstand
Nominated for ERP System of the Year ("ERP-System des Jahres") 2012 in the Services category, Potsdam Center for Enterprise Research (CER)
References
Project management software
1997 software
Projects established in 1997 |
7962417 | https://en.wikipedia.org/wiki/Event%20%28computing%29 | Event (computing) | In programming and software design, an event is an action or occurrence recognized by software, often originating asynchronously from the external environment, that may be handled by the software. Computer events can be generated or triggered by the system, by the user, or in other ways. Typically, events are handled synchronously with the program flow; that is, the software may have one or more dedicated places where events are handled, frequently an event loop. A source of events includes the user, who may interact with the software through the computer's peripherals - for example, by typing on the keyboard. Another source is a hardware device such as a timer. Software can also trigger its own set of events into the event loop, e.g. to communicate the completion of a task. Software that changes its behavior in response to events is said to be event-driven, often with the goal of being interactive.
Description
Event driven systems are typically used when there is some asynchronous external activity that needs to be handled by a program; for example, a user who presses a button on their mouse. An event driven system typically runs an event loop, that keeps waiting for such activities, e.g. input from devices or internal alarms. When one of these occurs, it collects data about the event and dispatches the event to the event handler software that will deal with it.
A program can choose to ignore events, and there may be libraries to dispatch an event to multiple handlers that may be programmed to listen for a particular event. The data associated with an event at a minimum specifies what type of event it is, but may include other information such as when it occurred, who or what caused it to occur, and extra data provided by the event source to the handler about how the event should be processed.
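The overall shape of such a system can be sketched as follows (in C#; the event type, queue, and handler registry here are invented for illustration and do not correspond to any particular toolkit's API):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Minimal event object: what happened, when, and any extra data supplied by the source.
class AppEvent {
    public string Type;
    public DateTime When;
    public object Data;
}

class EventLoop {
    private readonly BlockingCollection<AppEvent> queue = new BlockingCollection<AppEvent>();
    private readonly Dictionary<string, List<Action<AppEvent>>> handlers =
        new Dictionary<string, List<Action<AppEvent>>>();

    // A listener registers interest in a particular type of event.
    public void Subscribe(string type, Action<AppEvent> handler) {
        if (!handlers.ContainsKey(type))
            handlers[type] = new List<Action<AppEvent>>();
        handlers[type].Add(handler);
    }

    // Event sources (devices, timers, other code) post events here.
    public void Post(AppEvent e) { queue.Add(e); }

    // Signals that no further events will arrive, so Run() returns once the queue is drained.
    public void Stop() { queue.CompleteAdding(); }

    // The event loop: wait for the next event, then dispatch it to every registered handler.
    // Events of a type nobody listens to are simply ignored.
    public void Run() {
        foreach (AppEvent e in queue.GetConsumingEnumerable()) {
            List<Action<AppEvent>> list;
            if (handlers.TryGetValue(e.Type, out list))
                foreach (Action<AppEvent> h in list) h(e);
        }
    }
}

class Demo {
    static void Main() {
        EventLoop loop = new EventLoop();
        loop.Subscribe("keypress", e => Console.WriteLine("key pressed: " + e.Data));
        loop.Post(new AppEvent { Type = "keypress", When = DateTime.Now, Data = "A" });
        loop.Stop();   // in a real program the loop would keep waiting for more events
        loop.Run();
    }
}

In a real GUI framework the call corresponding to Run() normally never returns during operation; the framework itself keeps posting keyboard, mouse, and timer events into the queue.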
Events are typically used in user interfaces, where actions in the outside world (mouse clicks, window-resizing, keyboard presses, messages from other programs, etc.) are handled by the program as a series of events. Programs written for many windowing environments consist predominantly of event handlers.
Events can also be used at instruction set level, where they complement interrupts. Compared to interrupts, events are normally implemented synchronously: the program explicitly waits for an event to be generated and handled (typically by calling an instruction that dispatches the next event), whereas an interrupt can demand immediate service.
Delegate event model
A common variant in object-oriented programming is the delegate event model, which is provided by some graphic user interfaces. This model is based on three entities:
a control, which is the event source
listeners, also called event handlers, that receive the event notification from the source
interfaces (in the broader meaning of the term) that describe the protocol by which the event is to be communicated.
Furthermore, the model requires that:
every listener must implement the interface for the event it wants to listen to
every listener must register with the source to declare its desire to listen to the event
every time the source generates the event, it communicates it to the registered listeners, following the protocol of the interface.
C# uses events as special delegates that can only be fired by the class that declares them. This allows for better abstraction, for example:
using System;

// The delegate type defines the signature that event handlers must match.
delegate void Notifier(string sender);

class Model {
    // Event raised when the model changes; only Model itself can fire it.
    public event Notifier notifyViews;
    public void Change() {
        // ... change the model's internal state, then notify the registered listeners.
        notifyViews("Model");
    }
}

class View1 {
    public View1(Model m) {
        // Register this view's handler with the model's event.
        m.notifyViews += new Notifier(this.Update1);
    }
    void Update1(string sender) {
        Console.WriteLine(sender + " was changed during update");
    }
}

class View2 {
    public View2(Model m) {
        m.notifyViews += new Notifier(this.Update2);
    }
    void Update2(string sender) {
        Console.WriteLine(sender + " was changed");
    }
}

class Test {
    static void Main() {
        Model model = new Model();
        new View1(model);
        new View2(model);
        model.Change();   // both views receive the notification
    }
}
Event handler
In computer programming, an event handler is a callback subroutine that handles inputs received in a program (called a listener in Java and JavaScript). Each event is a piece of application-level information from the underlying framework, typically the GUI toolkit. GUI events include key presses, mouse movement, action selections, and timers expiring. On a lower level, events can represent availability of new data for reading a file or network stream. Event handlers are a central concept in event-driven programming.
The events are created by the framework based on interpreting lower-level inputs, which may be lower-level events themselves. For example, mouse movements and clicks are interpreted as menu selections. The events initially originate from actions on the operating system level, such as interrupts generated by hardware devices, software interrupt instructions, or state changes in polling. On this level, interrupt handlers and signal handlers correspond to event handlers.
Created events are first processed by an event dispatcher within the framework. It typically manages the associations between events and event handlers, and may queue event handlers or events for later processing. Event dispatchers may call event handlers directly, or wait for events to be dequeued with information about the handler to be executed.
Event notification
Event notification is a term used in conjunction with communications software for linking applications that generate small messages (the "events") to applications that monitor the associated conditions and may take actions triggered by events.
Event notification is an important feature in modern database systems (used to inform applications when conditions they are watching for have occurred), modern operating systems (used to inform applications when they should take some action, such as refreshing a window), and modern distributed systems, where the producer of an event might be on a different machine than the consumer, or consumers. Event notification platforms are normally designed so that the application producing events does not need to know which applications will consume them, or even how many applications will monitor the event stream.
It is sometimes used as a synonym for publish-subscribe, a term that relates to one class of products supporting event notification in networked settings. The virtual synchrony model is sometimes used to endow event notification systems, and publish-subscribe systems, with stronger fault-tolerance and consistency guarantees.
User-generated events
There are a large number of situations or events that a program or system may generate or respond to. Some common user generated events include:
Mouse events
A pointing device can generate a number of software-recognisable pointing device gestures. A mouse can generate a number of mouse events, such as mouse move (including direction and distance), mouse left/right button up/down, and mouse wheel motion, or a combination of these gestures. For example, a double-click commonly selects a word or the characters within a boundary, and a triple-click selects an entire paragraph.
Keyboard events
Pressing a key or a combination of keys on a keyboard generates a keyboard event, enabling the currently running program to respond to the input, such as which key or keys the user pressed.
Joystick events
Moving a joystick generates an X-Y analogue signal. Joysticks often have multiple buttons that trigger events. Some gamepads for popular game consoles use joysticks.
Touchscreen events
The events generated using a touchscreen are commonly referred to as touch events or gestures.
Device events
Device events include actions by or to a device, such as a shake, tilt, rotation, or movement.
See also
Callback (computer programming)
Database trigger
DOM events
Event-driven programming
Exception handling
Interrupt handler
Interrupts
Observer pattern (e.g., Event listener)
Reactor pattern vs. Proactor pattern
Signal programming
Virtual synchrony
References
External links
Article Event Handlers and Callback Functions
A High Level Design of the Sub-Farm Event Handler
An Events Syntax for XML
Distributed Events and Notifications
Event order
Javadoc documentation
Java package Javadoc API documentation
Java package Javadoc API documentation
Write an Event Handler
Computer programming
Subroutines |
10105613 | https://en.wikipedia.org/wiki/CobraNet | CobraNet | CobraNet is a combination of software, hardware, and network protocols designed to deliver uncompressed, multi-channel, low-latency digital audio over a standard Ethernet network. Developed in the 1990s, CobraNet is widely regarded as the first commercially successful audio-over-Ethernet implementation.
CobraNet was designed for and is primarily used in large commercial audio installations such as convention centers, stadiums, airports, theme parks, and concert halls. It has applications where a large number of audio channels must be transmitted over long distances or to multiple locations.
CobraNet is an alternative to analog audio, which suffers from signal degradation over long cable runs due to electromagnetic interference, high-frequency attenuation, and voltage drop. Additionally, the use of digital multiplexing allows audio to be transmitted using less cabling than analog audio.
History
CobraNet was developed in 1996 by Boulder, Colorado-based Peak Audio. Initial demonstrations were of a 10 Mbit/s point-to-point system with limited channel capacity. The first permanent installation of CobraNet in this early form was to provide background music throughout Disney's Animal Kingdom theme park. The first commercial use of CobraNet was during the halftime show at Super Bowl XXXI in 1997.
CobraNet was first introduced as an interoperable standard in collaboration with manufacturer QSC Audio Products. QSC was the first to license the technology from Peak Audio and marketed it under the RAVE brand. At this point CobraNet had graduated to fast Ethernet and used a unique collision avoidance technique to carry up to 64 channels per Ethernet collision domain.
CobraNet was subsequently enhanced to support and eventually require a switched Ethernet network. An SNMP agent was added for remote control and monitoring. Support for higher sample rates, increased bit resolutions and lowered latency capabilities were later introduced in an incremental and backward-compatible manner.
In May 2001, Cirrus Logic announced that it had acquired the assets of Peak Audio. Leveraging Cirrus DSP technology, a low-cost SoC implementation of CobraNet was developed and marketed.
Advantages and disadvantages
Advantages
Using CobraNet and fast Ethernet, 64 channels of uncompressed digital audio are carried through a single category 5 cable. Using gigabit or fiber optic Ethernet variants, the cost of cabling per audio channel is reduced further compared to the fast Ethernet implementation. CobraNet data can coexist with data traffic over existing Ethernet networks so a single network infrastructure can serve audio distribution and other networking needs.
Audio routing can be changed at any time with network commands, and does not require rewiring.
Audio is transmitted in digital form, which reduces susceptibility to electromagnetic interference, crosstalk, coloration, and attenuation owing to cable impedance.
Use of Ethernet by CobraNet offers many high availability features such as Spanning Tree Protocol, link aggregation, and network management. For critical applications, CobraNet devices can be wired with redundant connections to the network. In this configuration, if one CobraNet device, cable, or Ethernet switch fails, the other takes over almost immediately.
Disadvantages
Delays over the CobraNet transmission medium itself are at least 1.33 milliseconds per network traversal. For some applications, these delays can be unacceptable – especially when combined with further delays resulting from propagation time, digital signal processing and the conversions between analog and digital. Also, licensing the technology or purchasing the required CobraNet interfaces, which encode and decode the CobraNet signal, can be expensive.
Transmission
CobraNet is transmitted using standard Ethernet packets. Instead of using TCP/IP packets, CobraNet transfers data using data link layer packets, which travel quickly through hubs, bridges and switches, and are not as susceptible to the latency and QoS problems commonly found in streaming protocols that operate at the transport layer or higher. However, since CobraNet does not use the IP protocol, its packets cannot travel through routers, and it is therefore limited to use on a LAN; CobraNet cannot be used over the Internet. The network over which CobraNet is transmitted must be able to operate at a minimum of 100 Mbit/s (fast Ethernet). All CobraNet packets are identified with a unique Ethernet protocol identifier (0x8819) assigned to Cirrus Logic.
CobraNet is not designed to work over wireless networks. Bandwidth and reliability issues associated with typical 802.11 wireless networks tend to cause frequent dropouts and errors. However, wireless communication of CobraNet data can be accomplished reliably using lasers.
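Because every CobraNet frame carries the EtherType 0x8819, a capture or monitoring tool can separate CobraNet traffic from ordinary IP traffic by inspecting that one field. The sketch below assumes a raw Ethernet frame already read into a buffer; the function name is invented for illustration and no CobraNet-specific library is used.
#include <cstddef>
#include <cstdint>
// An Ethernet header is 14 bytes: destination MAC (6), source MAC (6),
// EtherType (2, big-endian). CobraNet frames use EtherType 0x8819.
constexpr std::uint16_t kCobraNetEtherType = 0x8819;
bool isCobraNetFrame(const std::uint8_t* frame, std::size_t length) {
    if (length < 14) {
        return false;                        // too short to hold an Ethernet header
    }
    const std::uint16_t etherType =
        static_cast<std::uint16_t>(frame[12]) << 8 | frame[13];
    return etherType == kCobraNetEtherType;  // true for beat, audio and reservation packets
}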
Channels and bundles
CobraNet data is organized into channels and bundles. A typical CobraNet signal can contain up to 4 bundles of audio traveling in each direction, for a total of 8 bundles per device. Each bundle houses up to 8 channels of 48 kHz, 20-bit audio, for a total capacity of 64 channels. CobraNet is somewhat scalable, in that channel capacity increases when 16-bit audio is used, and channel capacity decreases when 24-bit audio is used. The number of channels allowed per bundle is limited by the 1,500-byte Ethernet MTU.
There are three types of bundles: multicast, unicast, and private:
Multicast bundles are sent from one CobraNet device to all other CobraNet devices in the network using Ethernet multicast addressing. Each CobraNet device individually determines if it will use the bundle or discard it. Therefore, multicast bundles are more bandwidth-intensive than other bundle types. Bundle numbers 1–255 are reserved for multicast bundles.
Unicast bundles are sent from one CobraNet device to any other device or devices configured to receive the bundle number. Unicast bundles are much more efficient because network switches route them only to devices which actually want to receive them. Despite their name, unicast bundles may still be sent to multiple devices, either by transmitting multiple copies of the audio data or using multicast addressing. Bundle numbers 256–65279 are reserved for unicast bundles.
Private bundles may be sent with unicast or multicast addressing. Bundle numbers 65280–65535 are reserved for private bundles. Private bundle numbers are paired with the MAC address of the device that transmits them. To receive a private bundle, both the bundle number and the MAC address of the transmitter must be specified. Because 256 private bundles are available to each transmitter, there is no limit on the total number of private bundles on a network.
As long as multicast bundles are used sparingly, it is virtually impossible to exceed the bandwidth of a 100 Mbit network with CobraNet data. However, there are limitations to the maximum number of bundles that can be sent on a network, since the conductor must include data in its beat packets for every bundle on the network, and the beat packet is limited to 1,500 bytes. If each device is transmitting one bundle, there may be up to 184 transmitters active simultaneously (for a total of 184 bundles). If each device is transmitting four bundles, then only 105 transmitters can be active, although they would be producing a total of 421 active bundles. The use of private bundles does not require any additional data in the beat packet, so these network limitations can be sidestepped by using private bundles.
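The bandwidth claim above can be checked with a back-of-the-envelope calculation: 64 channels of 48 kHz, 20-bit audio amount to roughly 61 Mbit/s of payload before framing overhead, comfortably inside a 100 Mbit/s link. The sketch below performs that arithmetic; the overhead factor is a rough assumption for illustration, not a CobraNet specification.
#include <iostream>
int main() {
    const double channels       = 64;     // full CobraNet capacity on fast Ethernet
    const double sampleRate     = 48000;  // Hz
    const double bitsPerSample  = 20;
    const double overheadFactor = 1.2;    // assumed allowance for packet headers etc.
    const double payloadMbps = channels * sampleRate * bitsPerSample / 1e6;
    std::cout << "audio payload: " << payloadMbps << " Mbit/s\n";                   // ~61.4
    std::cout << "with overhead: " << payloadMbps * overheadFactor << " Mbit/s\n";  // ~73.7
    // Both figures sit well below the 100 Mbit/s capacity of fast Ethernet.
}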
Synchronization
The CobraNet network is synchronized to a single CobraNet device known as the conductor. A conductor priority can be configured to influence the selection of the conductor. Among devices with the same conductor priority, the first to establish itself on the network is elected conductor. All other devices are known as performers. In the event that the conductor fails, another CobraNet device will be chosen to become the conductor within milliseconds. CobraNet cannot function without a conductor.
Packets
Four main types of packet are used in the transmission and synchronization of CobraNet:
Beat packets – the conductor outputs a beat packet to all other CobraNet devices on the network at a rate of 750 packets per second. All other CobraNet devices on the network synchronize their audio clock and their data transmissions to the beat packet. The beat packet contains network operating parameters, clock data and transmission permissions for multicast and unicast bundles.
Audio packets – also known as isochronous data packets, these packets are sent out by all CobraNet devices after they receive a beat packet. At standard latency settings, one audio packet is sent for each beat packet received, and each audio packet includes 64 samples of audio data per channel. At lower latency settings, audio packets may be sent twice or four times for each beat packet received. Bundles do not share packets; separate packets are sent in sequence for each bundle transmitted from the same device.
Reservation packets – these packets are transmitted as needed or typically once per second at minimum. Their function is to control bandwidth allocation, initiate connections between CobraNet devices, and monitor the status of CobraNet devices.
Serial bridge packets – asynchronous serial data may be sent between CobraNet devices on the same network. Many standard asynchronous serial formats are supported, including RS-232, RS-422, RS-485 and MIDI.
Latency
The buffering and transmission of audio data in Ethernet packets typically incurs a delay of 256 samples, or 5.33 milliseconds at a 48 kHz sample rate. Additional delays are introduced through analog-to-digital and digital-to-analog conversion. Latency can be reduced by sending smaller packets more often. In most cases, the programmer can choose the desired CobraNet latency for a particular CobraNet device (1.33, 2.67, or 5.33 milliseconds). However, reducing audio latency has consequences:
Reducing latency requires more processing by the CobraNet interface and may reduce channel capacity.
Reducing latency places additional demands on network performance, and may not be possible in some network configurations if the forwarding delay is too high.
Since reducing latency means sending smaller packets more often, more channels of higher-resolution (e.g. 96 kHz, 24-bit) audio can be sent per bundle without exceeding the 1,500-byte payload limit for Ethernet packets.
It may seem from the Latency vs. Channels per bundle table that more information can be sent at a lower latency. However, that is not the case. More channels can be sent per bundle, but fewer bundles can be processed simultaneously by one device. So, while eight 24-bit, 96 kHz channels can be sent in one bundle at a reduced latency setting, due to processing constraints, the CobraNet device may only be able to send and receive one bundle instead of the usual four. The bundle capacity of a CobraNet device is unique to the particular device and is not always the same. The Channels per bundle vs. test case latencies table illustrates the bundle capacity for a Biamp AudiaFLEX-CM DSP device. The Rx and Tx columns indicate the absolute maximum number of channels that can be received or transmitted. The Rx/Tx column represents the maximum number of channels that can be received and transmitted simultaneously.
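The relationship between latency, samples per packet and channels per bundle can be reproduced with simple arithmetic: sending packets more often means fewer samples per channel in each packet, so each channel contributes fewer bytes toward the 1,500-byte limit. The sketch below uses the documented figure of 64 samples per channel per packet at the standard setting; the scaled-down buffer counts and the neglect of per-packet headers are assumptions for illustration, and real devices impose their own per-bundle channel limits on top of the MTU constraint.
#include <iostream>
int main() {
    const double sampleRate     = 48000.0;  // Hz
    const int    mtuBytes       = 1500;     // Ethernet payload limit per packet
    const int    bytesPerSample = 3;        // 24-bit audio
    // 64 samples per packet at the standard setting; packets sent two or four
    // times as often at the lower-latency settings. Buffered-sample counts are
    // assumed to shrink in proportion (256, 128, 64 samples).
    const int samplesPerPacket[] = {64, 32, 16};
    const int bufferedSamples[]  = {256, 128, 64};
    for (int i = 0; i < 3; ++i) {
        const double latencyMs       = 1000.0 * bufferedSamples[i] / sampleRate;  // 5.33, 2.67, 1.33
        const int payloadPerChannel  = samplesPerPacket[i] * bytesPerSample;
        const int channelsByMtuAlone = mtuBytes / payloadPerChannel;              // ignores headers
        std::cout << latencyMs << " ms latency: " << payloadPerChannel
                  << " bytes per channel per packet, so the MTU alone would allow up to "
                  << channelsByMtuAlone << " 24-bit channels per bundle\n";
    }
}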
Hardware and software
CobraNet network cards
CobraNet interfaces come in several varieties, some of which can support more channels than others. Additionally, CobraNet interfaces have two Ethernet ports labelled "primary" and "secondary". Only the primary Ethernet port needs to be connected, but if both ports are connected the latter acts as a fail-safe. Careful network design and topology which takes advantage of this feature can provide extremely high reliability in critical applications.
The typical CobraNet interfaces provided by Cirrus Logic are the CM-1 and the CM-2:
CM-1 – the standard CobraNet card, provides 32 in and 32 out audio channels.
CM-2 – compact, low-power, lower cost design provides 8 or 16 audio channels.
Both cards are designed to be added to audio products by the manufacturer.
Software
Cirrus Logic provides a software application known as CobraCAD, which assists in the design of the network on which the CobraNet system will run. It helps to identify whether there are too many switch hops between two CobraNet devices, whether a certain latency is possible given the network configuration, and other such questions. However, Cirrus Logic does not provide software to manipulate their hardware. In fact, in the simplest of cases, no software is required by the end user. For instance, a simple breakout box which converts a CobraNet signal to eight analog audio signals would require little or no configuration by the end user apart from possibly selecting the bundle number. If configuration is required (for example, in a DSP box with integrated CobraNet I/O), then the manufacturer of the device typically supplies proprietary software for that purpose.
Devices
One type of device that integrates CobraNet is the audio DSP. As self-powered speakers became more common, CobraNet was frequently used to distribute the audio signal from the DSP. These devices typically receive audio from CobraNet (and often from other digital or analog sources simultaneously), process the audio using digital filters and effects (for example, volume control, EQ, compression, delay, crossovers, etc.), and then output the audio via CobraNet (or other digital or analog outputs). Some DSPs even have an integral telephone hybrid, and can incorporate CobraNet and other sources into a teleconferencing application.
Amplifiers with integrated CobraNet help keep the signal chain digital for a longer span. Amplifiers with CobraNet inputs may also have limited DSP and network monitoring capabilities.
Loudspeakers with integrated CobraNet help keep the signal chain digital for an even longer span. In a typical unpowered speaker application, the amplifier would be housed far away from the speaker, and a long speaker cable (analog) would be run between the speaker and the amplifier. The speaker cable would be subject to interference and signal loss from electrical resistance. However, a powered speaker, powered by an electrical cable and fitted with integrated CobraNet inputs, eliminates the speaker cable and replaces it with a network cable. Since a speaker will only use one audio channel out of the bundle, many speakers with CobraNet will also have a number of analog outputs for the rest of the channels in the bundle, which is useful in speaker cluster applications.
Many digital mixing consoles are available with optional CobraNet interfaces for increased channel capacity and reduced cabling.
Manufacturers
Manufacturers who wish to integrate CobraNet connectivity into their devices either license the technology or purchase CobraNet interface modules or chips from Cirrus Logic. Many audio equipment manufacturers have included CobraNet in their products. Below is a partial list:
Biamp Systems
Bose Corporation
dbx
Crest Audio
Crown International
D&R Electronica
Dolby Laboratories
EAW
Electro-Voice
JBL
Lab.gruppen
Mackie
Midas Consoles
Peavey MediaMatrix
QSC Audio Products
Rane
Renkus-Heinz
Soundcraft
Symetrix
Yamaha Corporation
See also
Audio Contribution over IP
EtherSound
Dante
Notes
References
External links
CobraNet overview, usage and Frequently Asked Questions
Digital audio
Audio network protocols
Ethernet |
20534511 | https://en.wikipedia.org/wiki/Flash%20file%20system | Flash file system | A flash file system is a file system designed for storing files on flash memory–based storage devices. While flash file systems are closely related to file systems in general, they are optimized for the nature and characteristics of flash memory (such as to avoid write amplification), and for use in particular operating systems.
Overview
While a block device layer can emulate a disk drive so that a general-purpose file system can be used on a flash-based storage device, this is suboptimal for several reasons:
Erasing blocks: flash memory blocks have to be explicitly erased before they can be written to. The time taken to erase blocks can be significant, thus it is beneficial to erase unused blocks while the device is idle.
Random access: general-purpose file systems are optimized to avoid disk seeks whenever possible, due to the high cost of seeking. Flash memory devices impose no seek latency.
Wear leveling: flash memory devices tend to wear out when a single block is repeatedly overwritten; flash file systems are designed to spread out writes evenly.
Log-structured file systems have all the desirable properties for a flash file system. Such file systems include JFFS2 and YAFFS.
Because of the particular characteristics of flash memory, it is best used with either a controller to perform wear leveling and error correction or specifically designed flash file systems, which spread writes over the media and deal with the long erase times of NAND flash blocks. The basic concept behind flash file systems is: when the flash store is to be updated, the file system will write a new copy of the changed data over to a fresh block, remap the file pointers, then erase the old block later when it has time.
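A highly simplified sketch of that update strategy in C++: data is never overwritten in place; a new copy goes to a freshly erased block (chosen with wear leveling in mind), the logical-to-physical mapping is remapped, and the old block is queued for later erasure. The structures and names are invented for illustration and do not reflect the design of any particular flash file system.
#include <cstddef>
#include <cstdint>
#include <map>
#include <queue>
#include <vector>
struct Block {
    std::vector<std::uint8_t> data;
    std::uint32_t eraseCount = 0;   // used for wear leveling
    bool erased = true;
};
class FlashStore {
public:
    explicit FlashStore(std::size_t blockCount) : blocks_(blockCount) {}
    // Update a logical block: write a fresh copy, remap, erase the old copy later.
    void update(std::uint32_t logical, const std::vector<std::uint8_t>& newData) {
        const std::uint32_t fresh = pickLeastWornErasedBlock();
        blocks_[fresh].data = newData;
        blocks_[fresh].erased = false;
        auto it = mapping_.find(logical);
        if (it != mapping_.end()) {
            toErase_.push(it->second);   // old physical block is reclaimed later, when idle
        }
        mapping_[logical] = fresh;       // remap the logical block to its new location
    }
    // Called when the device is idle: erase one stale block.
    void garbageCollectOne() {
        if (toErase_.empty()) return;
        Block& b = blocks_[toErase_.front()];
        toErase_.pop();
        b.data.clear();
        b.erased = true;
        ++b.eraseCount;                  // erases wear the block out, so they are counted
    }
private:
    std::uint32_t pickLeastWornErasedBlock() {
        std::uint32_t best = 0;
        bool found = false;
        for (std::uint32_t i = 0; i < blocks_.size(); ++i) {
            if (blocks_[i].erased && (!found || blocks_[i].eraseCount < blocks_[best].eraseCount)) {
                best = i;
                found = true;
            }
        }
        return best;   // for brevity, assumes an erased block always exists
    }
    std::vector<Block> blocks_;
    std::map<std::uint32_t, std::uint32_t> mapping_;   // logical -> physical
    std::queue<std::uint32_t> toErase_;
};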
In practice, flash file systems are used only for Memory Technology Devices (MTDs), which are embedded flash memories that do not have a controller. Removable flash memory cards and USB flash drives have built-in controllers to manage MTD with dedicated algorithms, like wear leveling, bad block recovery, power loss recovery, garbage collection and error correction, so use of a flash file system has limited benefit.
Flash-based memory devices are becoming more prevalent as the number of mobile devices is increasing, the cost per memory size decreases, and the capacity of flash memory chips increases.
Origins
The earliest flash file system, managing an array of flash as a freely writable disk, was TrueFFS by M-Systems of Israel, presented as a software product in PC-Card Expo at Santa Clara, California, in July 1992 and patented in 1993.
One of the earliest flash file systems was Microsoft's FFS2, for use with MS-DOS, released in autumn 1992. FFS2 was preceded by an earlier product, called "FFS", which however fell short of being a flash file system, managing a flash array as write once read many (WORM) space rather than as a freely writable disk.
Around 1994, the PCMCIA, an industry group, approved the Flash Translation Layer (FTL) specification, based on the design of M-Systems' TrueFFS. The specification was authored and jointly proposed by M-Systems and SCM Microsystems, who also provided the first working implementations of FTL. Endorsed by Intel, FTL became a popular flash file system design in non-PCMCIA media as well.
Linux flash filesystems
JFFS, JFFS2 and YAFFS
JFFS was the first flash-specific file system for Linux, but it was quickly superseded by JFFS2, originally developed for NOR flash. Then YAFFS was released in 2002, dealing specifically with NAND flash, and JFFS2 was updated to support NAND flash too.
UBIFS
UBIFS was merged into the mainline Linux kernel with version 2.6.27 in 2008 and has been actively developed since its initial merge. UBIFS has documentation hosted at infradead.org along with JFFS2 and the MTD drivers. Some initial comparisons show UBIFS with compression to be faster than F2FS.
LogFS
LogFS, another Linux flash-specific file system, is being developed to address the scalability issues of JFFS2.
F2FS
F2FS (Flash-Friendly File System) was added to the Linux kernel 3.8. Instead of being targeted at speaking directly to raw flash devices, F2FS is designed to be used on flash-based storage devices that already include a flash translation layer, such as SD cards.
Union filesystems
Overlayfs, Unionfs, and aufs are union filesystems, which allow multiple filesystems to be combined and presented to the user as a single tree. This allows the system designer to place parts of the operating system that are nominally read-only on different media from the normal read-write areas. OpenWrt is usually installed on raw flash chips without an FTL. It uses overlayfs to combine a compressed read-only SquashFS with JFFS2.
Translation layers
See also
List of flash file systems
Wear leveling
Write amplification
References
External links
Presentation on various Flash File Systems – 2007-09-24
Article regarding various Flash File Systems – 2005 USENIX Annual Conference
Survey of various Flash File Systems – 2005-08-10
Anatomy of Linux Flash File Systems – 2008-05-20
Computer memory
Computer file systems |
346643 | https://en.wikipedia.org/wiki/Graduate%20Record%20Examinations | Graduate Record Examinations | The Graduate Record Examinations (GRE) is a standardized test that is an admissions requirement for many graduate schools in the United States and Canada and a few in other countries. The GRE is owned and administered by Educational Testing Service (ETS). The test was established in 1936 by the Carnegie Foundation for the Advancement of Teaching.
According to ETS, the GRE aims to measure verbal reasoning, quantitative reasoning, analytical writing, and critical thinking skills that have been acquired over a long period of learning. The content of the GRE consists of specific algebra, geometry, arithmetic, and vocabulary sections. The GRE General Test is offered as a computer-based exam administered at testing centers and institutions owned or authorized by Prometric. In the graduate school admissions process, the level of emphasis that is placed upon GRE scores varies widely between schools and departments within schools. The importance of a GRE score can range from being a mere admission formality to an important selection factor.
The GRE was significantly overhauled in August 2011, resulting in an exam that is not adaptive on a question-by-question basis, but rather by section, so that the performance on the first verbal and math sections determines the difficulty of the second sections presented. Overall, the test retained the sections and many of the question types from its predecessor, but the scoring scale was changed to a 130 to 170 scale (from a 200 to 800 scale).
The cost to take the test is US$205, although ETS will reduce the fee under certain circumstances. It also provides financial aid to those GRE applicants who prove economic hardship. ETS does not release scores that are older than five years, although graduate program policies on the acceptance of scores older than five years will vary.
History
The Graduate Record Examinations was "initiated in 1936 as a joint experiment in higher education by the graduate school deans of four Ivy League universities and the Carnegie Foundation for the Advancement of Teaching."
The first universities to experiment with the test on their students were Harvard University, Yale University, Princeton University and Columbia University. The University of Wisconsin was the first public university to ask its students to take the test, in 1938. It was first given to students at the University of Iowa in 1940, where it was analyzed by psychologist Dewey Stuit. It was first taken by students at Texas Tech University in 1942. In 1943, it was taken by students at Michigan State University, where it was analyzed by Paul Dressel. It was taken by over 45,000 students applying to 500 colleges in 1948.
"Until the Educational Testing Service was established in January, 1948, the Graduate Record Examination remained a project of the Carnegie Foundation."
2011 revision
In 2006, ETS announced plans to make significant changes in the format of the GRE. Planned changes for the revised GRE included a longer testing time, a departure from computer-adaptive testing, a new grading scale, and an enhanced focus on reasoning skills and critical thinking for both the quantitative and verbal sections.
On April 2, 2007, ETS announced the decision to cancel plans for revising the GRE. The announcement cited concerns over the ability to provide clear and equal access to the new test after the planned changes as an explanation for the cancellation. The ETS stated, however, that they do plan "to implement many of the planned test content improvements in the future", although specific details regarding those changes were not initially announced.
Changes to the GRE took effect on November 1, 2007, as ETS started to include new types of questions in the exam. The changes mostly centered on "fill in the blank" type answers for the mathematics section that requires the test-taker to fill in the blank directly, without being able to choose from a multiple choice list of answers. ETS announced plans to introduce two of these new types of questions in each quantitative section, while the majority of questions would be presented in the regular format.
Since January 2008, the Reading Comprehension within the verbal sections has been reformatted, passages' "line numbers will be replaced with highlighting when necessary in order to focus the test taker on specific information in the passage" to "help students more easily find the pertinent information in reading passages."
In December 2009, ETS announced plans to move forward with significant revisions to the GRE in 2011. Changes include a new 130–170 scoring scale, the elimination of certain question types such as antonyms and analogies, the addition of an online calculator, and the elimination of the CAT format of question-by-question adjustment, in favor of a section by section adjustment.
On August 1, 2011, the Revised GRE General Test replaced the GRE General Test. The revised GRE is said to be better by design and to provide a better test-taking experience. The new types of questions in the revised format are intended to test the skills needed in graduate and business school programs. From July 2012 onwards, ETS offered an option, called ScoreSelect, that allows test takers to choose which of their scores to send to schools.
Before October 2002
The earliest versions of the GRE tested only for verbal and quantitative ability. For a number of years before October 2002, the GRE had a separate Analytical Ability section which tested candidates on logical and analytical reasoning abilities. This section was replaced by the Analytical Writing Assessment.
Structure
The computer-based GRE General Test consists of six sections. The first section is always the analytical writing section, involving separately timed issue and argument tasks. The next five sections consist of two verbal reasoning sections, two quantitative reasoning sections, and either an experimental or research section. These five sections may occur in any order. The experimental section does not count towards the final score but is not distinguished from the scored sections. Unlike the computer adaptive test used before August 2011, the GRE General Test is a multistage test, where the examinee's performance on earlier sections determines the difficulty of subsequent sections. This format allows the examinee to move freely back and forth between questions within each section, and the testing software allows the user to "mark" questions within each section for later review if time remains. The entire testing procedure lasts about 3 hours 45 minutes. One-minute breaks are offered after each section and a 10-minute break after the third section.
The paper-based GRE General Test also consists of six sections. The analytical writing is split up into two sections, one section for each issue and argument task. The next four sections consist of two verbal and two quantitative sections in varying order. There is no experimental section on the paper-based test.
Verbal section
The computer-based verbal sections assess reading comprehension, critical reasoning, and vocabulary usage. The verbal test is scored on a scale of 130–170, in 1-point increments. (Before August 2011, the scale was 200–800, in 10-point increments.) In a typical examination, each verbal section consists of 20 questions to be completed in 30 minutes. Each verbal section consists of about 6 text completion, 4 sentence equivalence, and 10 critical reading questions. The changes in 2011 include a reduced emphasis on rote vocabulary knowledge and the elimination of antonyms and analogies. Text completion items have replaced sentence completions and new reading question types allowing for the selection of multiple answers were added.
Quantitative section
The computer-based quantitative sections assess basic high school level mathematical knowledge and reasoning skills. The quantitative test is scored on a scale of 130–170, in 1-point increments (Before August 2011 the scale was 200–800, in 10-point increments). In a typical examination, each quantitative section consists of 20 questions to be completed in 35 minutes. Each quantitative section consists of about 8 quantitative comparisons, 9 problem solving items, and 3 data interpretation questions. The changes in 2011 include the addition of numeric entry items requiring the examinee to fill in the blank and multiple-choice items requiring the examinee to select multiple correct responses.
Analytical writing section
The analytical writing section consists of two different essays, an "issue task" and an "argument task". The writing section is graded on a scale of 0–6, in half-point increments. The essays are written on a computer using a word processing program specifically designed by ETS. The program allows only basic computer functions and does not contain a spell-checker or other advanced features. Each essay is scored by at least two readers on a six-point holistic scale. If the two scores are within one point, the average of the scores is taken. If the two scores differ by more than a point, a third reader examines the response.
Issue Task
The test taker is given 30 minutes to write an essay about a selected topic. Issue topics are selected from a pool of questions, which the GRE Program has published in its entirety. Individuals preparing for the GRE may access the pool of tasks on the ETS website.
Argument Task
The test taker will be given an argument (i.e. a series of facts and considerations leading to a conclusion) and asked to write an essay that critiques the argument. Test takers are asked to consider the argument's logic and to make suggestions about how to improve the logic of the argument. Test takers are expected to address the logical flaws of the argument and not provide a personal opinion on the subject. The time allotted for this essay is 30 minutes. The Arguments are selected from a pool of topics, which the GRE Program has published in its entirety. Individuals preparing for the GRE may access the pool of tasks on the ETS website.
Experimental section
The experimental section, which can be either verbal or quantitative, contains new questions ETS is considering for future use. Although the experimental section does not count towards the test-taker's score, it is unidentified and appears identical to the scored sections. Because test takers have no definite way of knowing which section is experimental, it is typically advised that test takers try their best and be focused on every section. Sometimes an identified research section at the end of the test is given instead of the experimental section. There is no experimental section on the paper-based GRE.
Scoring
An examinee can miss one or more questions on a multiple-choice section and still receive a perfect score of 170. Likewise, even if no question is answered correctly, 130 is the lowest possible score. Verbal and quantitative reasoning scores are given in one-point increments, and analytical writing scores are given in half-point increments on a scale of 0 to 6.
Scaled score percentiles
The percentiles for the current General Test and the concordance with the prior format are as follows. According to interpretive data published by ETS, about 2 million people took the test between July 1, 2015 and June 30, 2018. Over that period, the verbal section had a mean of 150.24 and a standard deviation of 8.44, the quantitative section had a mean of 153.07 and a standard deviation of 9.24, and analytical writing had a mean of 3.55 and a standard deviation of 0.86.
"Field-wise distribution" of test takers is "limited to those who earned their college degrees up to two years before the test date." ETS provides no score data for "non-traditional" students who have been out of school more than two years, although its own report "RR-99-16" indicated that 22% of all test takers in 1996 were over the age of 30.
GRE Subject Tests
In addition to the General Test, there are also four GRE Subject Tests testing knowledge in the specific areas of Chemistry, Mathematics, Physics, and Psychology. The length of each exam is 170 minutes.
In the past, subject tests were also offered in the areas of Computer Science, Economics, Revised Education, Engineering, Geology, History, Music, Political Science, Sociology, and Biochemistry, Cell and Molecular Biology. In April 1998, the Revised Education and Political Science exams were discontinued. In April 2000, the History and Sociology exams were discontinued; with Economics, Engineering, Music, and Geology being discontinued in April 2001. The Computer Science exam was discontinued after April 2013. Biochemistry, Cell and Molecular Biology was discontinued in December 2016. The GRE Biology Test and GRE Literature in English Test tests were discontinued in April 2021.
Use in admissions
Many graduate schools in the United States require GRE results as part of the admissions process. The GRE is a standardized test intended to measure all graduates' abilities in tasks of general academic nature (regardless of their fields of specialization) and the extent to which undergraduate education has developed their verbal skills, quantitative skills, and abstract thinking.
In addition to GRE scores, admission to graduate schools depends on several other factors, such as GPA, letters of recommendation, and statements of purpose. Furthermore, unlike other standardized admissions tests (such as the SAT, LSAT, and MCAT), the use and weight of GRE scores vary considerably not only from school to school, but also from department to department and program to program. For instance, most business schools and economics programs require very high GRE or GMAT scores for entry, while engineering programs are known to allow more score variation. Liberal arts programs may only consider the applicant's verbal score, while mathematics and science programs may only consider quantitative ability. Some schools use the GRE in admissions decisions, but not in funding decisions; others use it for selection of scholarship and fellowship candidates, but not for admissions. In some cases, the GRE may be a general requirement for graduate admissions imposed by the university, while particular departments may not consider the scores at all. Graduate schools will typically provide the average scores of previously admitted students and information about how the GRE is considered in admissions and funding decisions. In some cases, programs have hard cut off requirements for the GRE; for example, the Yale Economics PhD program requires a minimum quantitative score of 160 to apply. The best way to ascertain how a particular school or program evaluates a GRE score in the admissions process is to contact the person in charge of graduate admissions for the specific program in question.
In February 2016, the University of Arizona James E. Rogers College of Law became the first law school to accept either the GRE or the Law School Admissions Test (LSAT) from all applicants. The college made the decision after conducting a study showing that the GRE is a valid and reliable predictor of students' first-term law school grades.
In the spring of 2017, Harvard Law School announced it was joining University of Arizona Law in accepting the GRE in addition to the LSAT from applicants to its three-year J.D. program.
MBA
GRE score can be used for MBA programs in some schools.
The GMAT (Graduate Management Admission Test) is a computer-adaptive standardized test in mathematics and the English language for measuring aptitude to succeed academically in graduate business studies.
Business schools commonly use the test as one of many selection criteria for admission into an MBA program. Starting in 2009, many business schools began accepting the GRE in lieu of a GMAT score. Policies varied widely for several years. However, as of the 2014–2015 admissions season, most business schools accept both tests equally. Either a GMAT score or a GRE score can be submitted for an application to an MBA program. Business schools also accept either score for their other (non-MBA) Masters and Ph.D. programs.
The primary issue on which business school test acceptance policies vary is in how old a GRE or GMAT score can be before it is no longer accepted. The standard is that scores cannot be more than 5 years old (e.g., Wharton, MIT Sloan, Columbia Business School).
Intellectual clubs
Some GRE scores (usually pre-2002 ones) are accepted as qualifying evidence to intellectual clubs such as Intertel, Mensa and the Triple Nine Society, the minimum passing score depending on the selectivity of the society and the time period when the test was taken. Intertel accepts scores in the 99th percentile obtained after 2011, while Mensa and TNS do not accept any score post-September 2001.
Preparation
A variety of resources are available for those wishing to prepare for the GRE. ETS provides preparation software called PowerPrep, which contains two practice tests of retired questions, as well as further practice questions and review material. Since the software replicates both the test format and the questions used, it can be useful to predict the actual GRE scores. ETS does not license their past questions to any other company, making them the only source for official retired material. ETS used to publish the "BIG BOOK" which contained a number of actual GRE questions; however, this publishing was abandoned. Several companies provide courses, books, and other unofficial preparation materials.
Some students taking the GRE use a test preparation company. Students who do not use these courses often rely on material from university text books, GRE preparation books, sample tests, and free web resources.
Testing locations
While the general and subject tests are held at many undergraduate institutions, the computer-based general test can be held in over 1000 locations with appropriate technological accommodations. In the United States, students in major cities or from large universities will usually find a nearby test center, while those in more isolated areas may have to travel a few hours to an urban or university location. Many industrialized countries also have test centers, but at times test-takers must cross country borders.
Criticism
Bias
Algorithmic bias
Critics have claimed that the computer-adaptive methodology may discourage some test takers since the question difficulty changes with performance. For example, if the test-taker is presented with remarkably easy questions halfway into the exam, they may infer that they are not performing well, which will influence their abilities as the exam continues, even though question difficulty is subjective. By contrast, standard testing methods may discourage students by giving them more difficult items earlier on.
Critics have also stated that the computer-adaptive method of placing more weight on the first several questions is biased against test takers who typically perform poorly at the beginning of a test due to stress or confusion before becoming more comfortable as the exam continues. On the other hand, standard fixed-form tests could equally be said to be "biased" against students with less testing stamina since they would need to be approximately twice the length of an equivalent computer adaptive test to obtain a similar level of precision.
Implicit bias
The GRE has also been subjected to the same racial bias criticisms that have been lodged against other admissions tests. In 1998, The Journal of Blacks in Higher Education noted that the mean score for black test-takers in 1996 was 389 on the verbal section, 409 on the quantitative section, and 423 on the analytic, while white test-takers averaged 496, 538, and 564, respectively. The National Association of Test Directors Symposia in 2004 stated a belief that simple mean score differences may not constitute evidence of bias unless the populations are known to be equal in ability. A more effective, accepted, and empirical approach is the analysis of differential test functioning, which examines the differences in item response theory curves for subgroups; the best approach for this is the DFIT framework.
Weak indicator of graduate school performance
The GRE has been criticized for not being a true measure of whether a student will be successful in graduate school. Robert Sternberg (now of Cornell University; working at Yale University at the time of the study), a long-time critic of modern intelligence testing in general, found the GRE general test was weakly predictive of success in graduate studies in psychology. The strongest relationship was found for the now-defunct analytical portion of the exam.
The ETS published a report ("What is the Value of the GRE?") that points out the predictive value of the GRE on a student's index of success at the graduate level. The problem with earlier studies is the statistical phenomenon of restriction of range. A correlation coefficient is sensitive to the range sampled for the test. Specifically, if only students accepted to graduate programs are studied (in Sternberg & Williams and other research), the relationship is occluded. Validity coefficients range from .30 to .45 between the GRE and both first year and overall graduate GPA in ETS' study.
Kaplan and Saccuzzo state that the criterion that the GRE best predicts is first-year grades in graduate school. However, this correlation is only in the high teens to low twenties. "If the test correlates with a criterion at the .4 level, then it accounts for 16% of the variability in that criterion, with the other 84% resulting from unknown factors and errors" (p. 303). Graduate schools may be placing too much importance on standardized tests rather than on factors that more fully account for graduate school success, such as prior research experience, GPAs, or work experience. While graduate schools do consider these areas, many times schools will not consider applicants that score below a current score of roughly 314 (1301 prior score). Kaplan and Saccuzzo also state that "the GRE predict[s] neither clinical skill nor even the ability to solve real-world problems" (p. 303).
In 2007, a study by a university found a correlation of .30 to .45 between the GRE and both first year and overall graduate GPA. The correlation between GRE score and graduate school completion rates ranged from .11 (for the now defunct analytical section) to .39 (for the GRE subject test). Correlations with faculty ratings ranged from .35 to .50.
Historical susceptibility to cheating
In May 1994, Kaplan, Inc warned ETS, in hearings before a New York legislative committee, that the small question pool available to the computer-adaptive test made it vulnerable to cheating. ETS assured investigators that it was using multiple sets of questions and that the test was secure. This was later discovered to be incorrect.
In December 1994, prompted by student reports of recycled questions, then Director of GRE Programs for Kaplan, Inc and current CEO of Knewton, Jose Ferreira led a team of 22 staff members deployed to 9 U.S. cities to take the exam. Kaplan, Inc then presented ETS with 150 questions, representing 70–80% of the GRE. According to early news releases, ETS appeared grateful to Stanley H. Kaplan, Inc for identifying the security problem. However, on December 31, ETS sued Kaplan, Inc for violation of a federal electronic communications privacy act, copyright laws, breach of contract, fraud, and a confidentiality agreement signed by test-takers on test day. On January 2, 1995, an agreement was reached out of court.
Additionally, in 1994, the scoring algorithm for the computer-adaptive form of the GRE was discovered to be insecure. ETS acknowledged that Kaplan, Inc employees, led by Jose Ferreira, reverse-engineered key features of the GRE scoring algorithms. The researchers found that a test taker's performance on the first few questions of the exam had a disproportionate effect on the test taker's final score. To preserve the integrity of scores, ETS revised its scoring and uses a more sophisticated scoring algorithm.
See also
List of admissions tests
GRE Subject Tests:
GRE Biology Test
GRE Chemistry Test
GRE Literature in English Test
GRE Mathematics Test
GRE Physics Test
GRE Psychology Test
Other tests:
Law School Admission Test (LSAT)
Medical College Admission Test (MCAT)
Graduate Management Admission Test (GMAT)
Graduate Aptitude Test in Engineering (GATE)
SAT
ACT (test)
Test of English as a Foreign Language (TOEFL)
International English Language Testing System (IELTS)
References
External links
GRE information website for residents of Mainland China, English version - by the Chinese National Education Examinations Authority
1936 establishments in the United States
Entrance examinations
Standardized tests in the United States
Standardized tests for English language |
897495 | https://en.wikipedia.org/wiki/Paste%20%28Unix%29 | Paste (Unix) | paste is a Unix command line utility which is used to join files horizontally (parallel merging) by outputting lines consisting of the sequentially corresponding lines of each file specified, separated by tabs, to the standard output. It is effectively the horizontal equivalent of the cat utility, which concatenates the contents of two or more files vertically.
History
The version of paste bundled in GNU coreutils was written by David M. Ihnat and David MacKenzie. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.
Usage
The utility is invoked with the following syntax:
paste [options] [file1 ..]
Description
Once invoked, paste will read all its arguments. For each corresponding line, paste will append the contents of each file at that line to its output, along with a tab. When it has completed its operation for the last file, paste will output a newline character and move on to the next line.
paste exits after all streams return end of file. The number of lines in the output stream will equal the number of lines in the input file with the largest number of lines. Missing values are represented by empty strings.
Though potentially useful, an option to have paste emit an alternate string for a missing field (such as "NA") is not standard.
A sequence of empty records at the bottom of a column of the output stream may or may not have been present as explicit empty records in the input file corresponding to that column, unless it is known that the input files supplied all rows explicitly (e.g. in the canonical case where all input files have the same number of lines).
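The behaviour described above – consume one line from each input per output row, separate the fields with tabs, and substitute an empty string once a shorter file runs out – can be sketched in a few lines of C++. This is a toy illustration of the two-file case only, not the coreutils implementation, and it omits the -d and -s options.
#include <fstream>
#include <iostream>
#include <string>
int main(int argc, char* argv[]) {
    if (argc != 3) {
        std::cerr << "usage: mini-paste file1 file2\n";
        return 1;
    }
    std::ifstream a(argv[1]), b(argv[2]);
    std::string lineA, lineB;
    // Keep producing rows until both inputs have reached end of file.
    while (true) {
        const bool gotA = static_cast<bool>(std::getline(a, lineA));
        const bool gotB = static_cast<bool>(std::getline(b, lineB));
        if (!gotA && !gotB) break;                    // both streams exhausted
        // A missing line is represented by an empty field, as paste does.
        std::cout << (gotA ? lineA : "") << '\t' << (gotB ? lineB : "") << '\n';
    }
    return 0;
}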
Options
The utility accepts the following options:
-d delimiters, which specifies a list of delimiters to be used instead of tabs for separating consecutive values on a single line. Each delimiter is used in turn; when the list has been exhausted, paste begins again at the first delimiter.
-s, which causes paste to append the data in serial rather than in parallel; that is, in a horizontal rather than vertical fashion.
Examples
For the following examples, assume that names.txt is a plain-text file that contains the following information:
Mark Smith
Bobby Brown
Sue Miller
Jenny Igotit
and that numbers.txt is another plain-text file that contains the following information:
555-1234
555-9876
555-6743
867-5309
The following example shows the invocation of paste with names.txt and numbers.txt as well as the resulting output:
$ paste names.txt numbers.txt
Mark Smith 555-1234
Bobby Brown 555-9876
Sue Miller 555-6743
Jenny Igotit 867-5309
When invoked with the -s option, the output of paste is adjusted such that the information is presented in a horizontal fashion:
$ paste -s names.txt numbers.txt
Mark Smith Bobby Brown Sue Miller Jenny Igotit
555-1234 555-9876 555-6743 867-5309
Finally, the use of the -d (delimiters) option is illustrated in the following example:
$ paste -d ., names.txt numbers.txt
Mark Smith.555-1234
Bobby Brown.555-9876
Sue Miller.555-6743
Jenny Igotit.867-5309
As an example using both options, the command can be used to merge each pair of consecutive lines into a single row:
$ paste -s -d '\t\n' names.txt
Mark Smith Bobby Brown
Sue Miller Jenny Igotit
See also
join
cut
List of Unix commands
References
External links
Unix text processing utilities
Unix SUS2008 utilities |
3745271 | https://en.wikipedia.org/wiki/Insight%20Segmentation%20and%20Registration%20Toolkit | Insight Segmentation and Registration Toolkit |
ITK is a cross-platform, open-source application development framework widely used for the development of image segmentation and image registration programs. Segmentation is the process of identifying and classifying data found in a digitally sampled representation. Typically the sampled representation is an image acquired from such medical instrumentation as CT or MRI scanners. Registration is the task of aligning or developing correspondences between data. For example, in the medical environment, a CT scan may be aligned with an MRI scan in order to combine the information contained in both.
ITK was developed with funding from the National Library of Medicine (U.S.) as an open resource of algorithms for analyzing the images of the Visible Human Project. ITK stands for The Insight Segmentation and Registration Toolkit. The toolkit provides leading-edge segmentation and registration algorithms in two, three, and more dimensions. ITK uses the CMake build environment to manage the configuration process. The software is implemented in C++ and it is wrapped for Python. An offshoot of the ITK project providing a simplified interface to ITK in eight programming languages, SimpleITK, is also under active development.
Introduction
Origins
In 1999 the US National Library of Medicine of the National Institutes of Health awarded a three-year contract to develop an open-source registration and segmentation toolkit, which eventually came to be known as the Insight Toolkit (ITK). ITK's NLM Project Manager was Dr. Terry Yoo, who coordinated the six prime contractors who made up the Insight Software Consortium. These consortium members included the three commercial partners GE Corporate R&D, Kitware, Inc., and MathSoft (the company name is now Insightful); and the three academic partners University of North Carolina (UNC), University of Tennessee (UT), and University of Pennsylvania (UPenn). The Principal Investigators for these partners were, respectively, Bill Lorensen at GE CRD, Will Schroeder at Kitware, Vikram Chalana at Insightful, Stephen Aylward with Luis Ibáñez at UNC (both of whom subsequently moved to Kitware), Ross Whitaker with Josh Cates at UT (both now at Utah), and Dimitris Metaxas at UPenn (Dimitris Metaxas is now at Rutgers University). In addition, several subcontractors rounded out the consortium including Peter Ratiu at Brigham & Women's Hospital, Celina Imielinska and Pat Molholt at Columbia University, Jim Gee at UPenn's Grasp Lab, and George Stetten at University of Pittsburgh.
Technical details
ITK is an open-source software toolkit for performing registration and segmentation. Segmentation is the process of identifying and classifying data found in a digitally sampled representation. Typically the sampled representation is an image acquired from such medical instrumentation as CT or MRI scanners. Registration is the task of aligning or developing correspondences between data. For example, in the medical environment, a CT scan may be aligned with an MRI scan in order to combine the information contained in both.
ITK is implemented in C++. ITK is cross-platform, using the CMake build environment to manage the compilation process. In addition, an automated wrapping process generates interfaces between C++ and other programming languages such as Java and Python. This enables developers to create software using a variety of programming languages. ITK's implementation employs the technique of generic programming through the use of C++ templates.
Because ITK is an open-source project, developers from around the world can use, debug, maintain, and extend the software. ITK uses a model of software development referred to as extreme programming. Extreme programming collapses the usual software creation methodology into a simultaneous and iterative process of design-implement-test-release. The key features of extreme programming are communication and testing. Communication among the members of the ITK community is what helps manage the rapid evolution of the software. Testing is what keeps the software stable. In ITK, an extensive testing process (using CDash) is in place that measures the quality on a daily basis. The ITK Testing Dashboard is posted continuously, reflecting the quality of the software.
Developers and contributors
The Insight Toolkit was initially developed by six principal organizations
Kitware
GE Corporate R&D
Insightful
University of North Carolina at Chapel Hill
University of Utah
University of Pennsylvania
and three subcontractors
Harvard Brigham & Women's Hospital
University of Pittsburgh
Columbia University
After its inception the software continued growing with contributions from other institutions including
University of Iowa
Georgetown University
Stanford University
King's College London
Creatis INSA
Funding
The funding for the project is from the National Library of Medicine at the National Institutes of Health. NLM in turn was supported by member institutions of NIH (see sponsors).
The goals for the project include the following:
Support the Visible Human Project.
Establish a foundation for future research.
Create a repository of fundamental algorithms.
Develop a platform for advanced product development.
Support commercial application of the technology.
Create conventions for future work.
Grow a self-sustaining community of software users and developers.
The source code of the Insight Toolkit is distributed under an Apache 2.0 License (as approved by the Open Source Initiative)
The philosophy of Open Source of the Insight Toolkit was extended to support open science, in particular by providing open access to publications in the domain of Medical Image Processing. These publications are made freely available through the Insight Journal
Community participation
Because ITK is an open-source system, anybody can make contributions to the project. A person interested in contributing to ITK can take the following actions
Read the ITK Software Guide. (This book can be purchased from Kitware's store.)
Read the instructions on how to contribute classes and algorithms to the Toolkit via submissions to the Insight Journal
Obtain access to GitHub.
Follow the Git contribution instructions.
Join the ITK Discourse discussion. The community is open to everyone.
Anyone can submit a patch, and write access to the repository is not necessary to get a patch merged or retain authorship credit. For more information, see the ITK Bar Camp documentation on how to submit a patch.
Copyright and license
ITK is copyrighted by the Insight Software Consortium, a non-profit alliance of organizations and individuals interested in supporting ITK. Starting with ITK version 3.6, the software is distributed under a BSD open-source license. It allows use for any purpose, with the possible exception of code found in the patented directory, and with proper recognition. The full terms of the copyright and the license are available on the ITK website. Version 4.0 uses the Apache 2.0 License.
The license was changed to Apache 2.0 with version 4.0 in order to adopt a modern license with patent protection provisions. From version 3.6 to 3.20, a simplified BSD license was used. Versions of ITK prior to 3.6 were distributed under a modified BSD license. The main motivation for adopting a BSD license starting with ITK 3.6 was to have an OSI-approved license.
Technical Summary
The following sections summarize the technical features of the NLM's Insight ITK toolkit.
Design Philosophy
The following are key features of the toolkit design philosophy.
The toolkit provides data representation and algorithms for performing segmentation and registration. The focus is on medical applications; although the toolkit is capable of processing other data types.
The toolkit provides data representations in general form for images (arbitrary dimension) and (unstructured) meshes.
The toolkit does not address visualization or graphical user interface. These are left to other toolkits (such as VTK, VISPACK, 3DViewnix, MetaImage, etc.)
The toolkit provides minimal tools for file interface. Again, this is left to other toolkits/libraries to provide.
Multi-threaded (shared memory) parallel processing is supported.
The development of the toolkit is based on principles of extreme programming. That is, design, implementation, and testing is performed in a rapid, iterative process. Testing forms the core of this process. In Insight, testing is performed continuously as files are checked in, and every night across multiple platforms and compilers. The ITK testing dashboard, where testing results are posted, is central to this process.
Architecture
The following are key features of the toolkit architecture.
The toolkit is organized around a data-flow architecture. That is, data is represented using data objects which are in turn processed by process objects (filters). Data objects and process objects are connected together into pipelines. Pipelines are capable of processing the data in pieces according to a user-specified memory limit set on the pipeline.
Object factories are used to instantiate objects. Factories allow run-time extension of the system.
A command/observer design pattern is used for event processing.
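As an illustration of this pattern, the following is a minimal sketch (not taken verbatim from the ITK documentation; the class name ProgressObserver and the usage snippet at the end are illustrative assumptions): an itk::Command subclass is attached to a filter so that it prints the filter's progress whenever a ProgressEvent is emitted.
#include <iostream>
#include "itkCommand.h"
#include "itkEventObject.h"
#include "itkObjectFactory.h"
#include "itkProcessObject.h"

// A minimal observer that prints a filter's progress whenever a ProgressEvent is received.
class ProgressObserver : public itk::Command
{
public:
  using Self = ProgressObserver;
  using Superclass = itk::Command;
  using Pointer = itk::SmartPointer< Self >;
  itkNewMacro( Self );

  void Execute( itk::Object * caller, const itk::EventObject & event ) override
  {
    Execute( static_cast< const itk::Object * >( caller ), event );
  }

  void Execute( const itk::Object * caller, const itk::EventObject & event ) override
  {
    if ( itk::ProgressEvent().CheckEvent( &event ) )
    {
      const auto * process = dynamic_cast< const itk::ProcessObject * >( caller );
      if ( process != nullptr )
      {
        std::cout << "Progress: " << process->GetProgress() << std::endl;
      }
    }
  }

protected:
  ProgressObserver() = default;
};

// Usage, assuming 'filter' is any ITK process object (for example, a smoothing filter):
//   ProgressObserver::Pointer observer = ProgressObserver::New();
//   filter->AddObserver( itk::ProgressEvent(), observer );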
Implementation philosophy
The following are key features of the toolkit implementation philosophy.
The toolkit is implemented using generic programming principles. Such heavily templated C++ code challenges many compilers; hence development was carried out with the latest versions of the MSVC, Sun, gcc, Intel, and SGI compilers.
The toolkit is cross-platform (Unix, Windows and Mac OS X).
The toolkit supports multiple language bindings, including such languages as Tcl, Python, and Java. These bindings are generated automatically using an auto-wrap process.
The memory model depends on "smart pointers" that maintain a reference count to objects. Smart pointers can be allocated on the stack, and when scope is exited, the smart pointers disappear and decrement their reference count to the object that they refer to.
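As a brief illustration of this reference-counting behavior (a minimal sketch; the image type and variable names are arbitrary), the object below stays alive as long as at least one smart pointer still refers to it.
#include <iostream>
#include "itkImage.h"

int main()
{
  using ImageType = itk::Image< float, 2 >;

  ImageType::Pointer outer;                       // smart pointer on the stack, initially null
  {
    ImageType::Pointer inner = ImageType::New();  // object created, reference count is 1
    outer = inner;                                // a second smart pointer refers to it, count is 2
  }                                               // 'inner' leaves scope and decrements the count to 1

  std::cout << outer->GetReferenceCount() << std::endl;  // the image object is still alive here
  return 0;
}                                                 // 'outer' is destroyed, the count reaches 0, the image is deleted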
Build environment
ITK uses the CMake (cross-platform make) build environment. CMake is an operating system and compiler independent build process that produces native build files appropriate to the OS and compiler that it is run with. On Unix CMake produces makefiles and on Windows CMake generates projects and workspaces.
Testing environment
ITK supports an extensive testing environment. The code is tested daily (and even continuously) on many hardware/operating system/compiler combinations, and the results are posted daily on the ITK testing dashboard. Dart is used to manage the testing process and to post the results to the dashboard.
Background references: C++ patterns and generics
ITK uses many advanced design patterns and generic programming techniques. The following references may be useful in understanding the design and syntax of Insight.
Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides (foreword by Grady Booch)
Generic Programming and the STL: Using and Extending the C++ Standard Template Library (Addison-Wesley Professional Computing Series) by Matthew H. Austern
Advanced C++ Programming Styles and Idioms by James O. Coplien
C/C++ Users Journal
C++ Report
Examples
Gaussian-smoothed image gradient
#include "itkImage.h"
int main()
{
using ImageType = itk::Image< unsigned char, 3 >;
using ReaderType = itk::ImageFileReader< ImageType >;
using WriterType = itk::ImageFileWriter< ImageType >;
using FilterType = itk::GradientRecursiveGaussianImageFilter< ImageType, ImageType >;
ReaderType::Pointer reader = ReaderType::New();
WriterType::Pointer writer = WriterType::New();
reader->SetFileName( "lungCT.dcm" );
writer->SetFileName( "smoothedLung.hdr" );
FilterType::Pointer filter = FilterType::New();
filter->SetInput( reader->GetOutput() );
writer->SetInput( filter->GetOutput() );
filter->SetSigma( 45.0 );
try
{
writer->Update();
}
catch( itk::ExceptionObject & excp )
{
std::cerr << excp << std::endl;
return EXIT_FAILURE;
}
}
Region growing segmentation
#include "itkImage.h"
int main()
{
using InputImageType = itk::Image< signed short, 3 >;
using OutputImageType = itk::Image< unsigned char, 3 >;
using ReaderType = itk::ImageFileReader< InputImageType >;
using WriterType = itk::ImageFileWriter< OutputImageType >;
using FilterType = itk::ConnectedThresholdImageFilter< InputImageType, OutputImageType >;
ReaderType::Pointer reader = ReaderType::New();
WriterType::Pointer writer = WriterType::New();
reader->SetFileName( "brain.dcm" );
writer->SetFileName( "whiteMatter.hdr" );
FilterType::Pointer filter = FilterType::New();
filter->SetInput( reader->GetOutput() );
writer->SetInput( filter->GetOutput() );
filter->SetMultiplier( 2.5 );
ImageType::IndexType seed;
seed[0] = 142;
seed[1] = 97;
seed[2] = 63;
filter->AddSeed( seed );
try
{
writer->Update();
}
catch( itk::ExceptionObject & excp )
{
std::cerr << excp << std::endl;
return EXIT_FAILURE;
}
}
Additional information
Resources
A number of resources are available to learn more about ITK.
The ITK web pages provide general information about the toolkit.
Users and developers alike should read the ITK Software Guide
Many compilable examples are available on the ITK Examples Wiki
Tutorials are also available online.
The software can be downloaded from the project's website.
Developers, or users interested in contributing code, should look in the document Insight/Documentation/InsightDeveloperStart.pdf or InsightDeveloperStart.doc found in the source code distribution.
Developers should also look at the ITK style guide Insight/Documentation/Style.pdf found in the source distribution.
Applications
A great way to learn about ITK is to see how it is used. There are four places to find applications of ITK.
The Insight/Examples/ source code examples distributed with ITK. The source code is heavily commented and works in combination with the ITK Software Guide.
The separate InsightApplications checkout.
The Applications web pages. These are extensive descriptions, with images and references, of the examples found in Insight/Examples/ above.
The testing directories distributed with ITK are simple, mainly undocumented examples of how to use the code.
In 2004, ITK-SNAP was developed from SNAP and became a popular free segmentation tool that uses ITK and offers a simple user interface.
Data
Data is available in the ITK community collection on data.kitware.com (a Girder instance).
See also the ITK Data web page.
See also
Related tools
CMake
VTK
Contacts
Visit the ITK discussion forum for contacts and help from the community.
References
External links
ITK
Computer vision software
Free computer libraries
Free science software
Free software programmed in C++
Image segmentation
Software using the Apache license |
14490007 | https://en.wikipedia.org/wiki/Internet%20Mapping%20Project | Internet Mapping Project | The Internet Mapping Project
was started by William Cheswick and Hal Burch at Bell Labs in 1997. It has collected and preserved traceroute-style paths to some hundreds of thousands of networks almost daily since 1998. The project included visualization of the Internet data, and the Internet maps were widely disseminated.
The technology is now used by Lumeta, a spinoff of Bell Labs, to map corporate and government networks.
Although Cheswick left Lumeta in September 2006, Lumeta continues to map both the IPv4 and IPv6 Internet. The data allows for both a snapshot and view over time of the routed infrastructure of a particular geographical area, company, organization, etc.
Cheswick continues to collect and preserve the data, and it is available for research purposes. According to Cheswick, a main goal of the project was to collect the data over time, and make a time-lapse movie of the growth of the Internet.
Techniques
The techniques available for network discovery rely on hop-limited probes of the type used by the Unix traceroute utility or the Windows NT tracert.exe tool. A traceroute-style network probe follows the path that network packets take from a source node to a destination node. This technique uses Internet Protocol packets with an 8-bit time to live (TTL) header field. Each router that forwards a packet decreases its TTL value by one; when the TTL reaches zero, the router drops the packet instead of forwarding it and sends an Internet Control Message Protocol (ICMP) error message back to the source node, indicating that the packet exceeded its maximum transit time.
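As an illustration of this hop-limited probing (a minimal sketch using POSIX sockets on Linux; the destination address 192.0.2.1, the port, and the hop limit of 30 are arbitrary placeholders), the following program sends UDP probes with an increasing TTL. A complete traceroute implementation would also listen on a raw ICMP socket for the "time exceeded" replies, which is omitted here.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <iostream>

int main()
{
    const char * destination = "192.0.2.1";   // placeholder documentation address
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(33434);             // traditional traceroute base port
    inet_pton(AF_INET, destination, &addr.sin_addr);

    const int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0)
    {
        std::cerr << "could not create socket" << std::endl;
        return 1;
    }

    for (int ttl = 1; ttl <= 30; ++ttl)
    {
        // Each probe is sent with a larger TTL; the router at hop 'ttl' decrements it to
        // zero, drops the packet, and normally returns an ICMP "time exceeded" message.
        setsockopt(sock, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl));
        const char payload[] = "probe";
        sendto(sock, payload, sizeof(payload), 0,
               reinterpret_cast<const sockaddr *>(&addr), sizeof(addr));
    }
    close(sock);
    return 0;
}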
Active probing – Active probing is a series of probes sent through a network to obtain data. Active probing is used in Internet mapping to discover the topology of the Internet. Topology maps of the Internet are an important tool for characterizing its infrastructure and understanding its properties, behavior, and evolution.
Other internet mapping projects
Hand Drawn Maps of Internet from 1973.
The Center for Applied Internet Data Analysis (CAIDA) collects, monitors, analyzes, and maps several forms of Internet traffic data concerning network topology. Their "Internet Topology Maps also referred to as AS-level Internet Graphs [are being generated] in order to visualize the shifting topology of the Internet over time."
The Opte Project, started in 2003 by engineer Barrett Lyon, using traceroute and BGP routes for mapping.
New Hampshire Project – In 2010, the U.S. Department of Commerce awarded the University of New Hampshire's Geographically Referenced Analysis and Information Transfer (NH GRANIT) project approximately $1.7 million to manage a program to inventory and map current and planned broadband coverage available to the state's businesses, educators, and citizens. As part of this project, the New Hampshire Broadband Mapping Program (NHBMP) was created as a coordinated, multi-agency initiative funded by the American Recovery and Reinvestment Act through the National Telecommunications and Information Administration (NTIA), and is part of a national effort to expand high-speed Internet access and adoption through improved data collection and broadband planning.
In 2015, Kevin Kelly, cofounder of Wired magazine, started his own Internet Mapping Project to understand how people conceive of the internet. He wanted to discover the maps people have in their minds as they navigate the internet by having them submit hand-drawn pictures. So far, he has collected close to 80 submissions from people of all ages, nationalities, and expertise levels, ranging from the concrete to the conceptual to the comic.
See also
Network mapping
Route analytics
References
Internet architecture
1997 establishments in the United States |
33237904 | https://en.wikipedia.org/wiki/Applied%20Information%20Science%20in%20Economics | Applied Information Science in Economics | The Applied Information Science in Economics (Прикладная информатика в экономике), or Applied Computer Science in Economics, is a professional qualification generally awarded in the Russian Federation. The degree is inherited from the U.S.S.R. education system and is also known as a Specialist degree. The degree is awarded after five years of full-time study and includes several internships, coursework, thesis writing, and a thesis defense.
The degree has similarities with the German Magister Artium or Diplom degree. However, due to the Bologna Process, the number of such degrees is declining.
The degree focuses on applying mathematical methods in economics with extensive use of information technology. It is very close to applied mathematics, but also includes a major computer science component.
List of specialty codes in the education system
080801 - Applied computer science in economics
351400 - Applied computer science
Fields of activity
Organization and management;
Project design;
Experimental research;
Marketing;
Consulting;
Operation and Maintenance.
Major
Information Science and Programming.
High Level Methods of Information Science and Programming.
Information Technologies in Economics.
Computer Systems, Networks and Telecommunications Services.
Operational Environments, Systems and Shells.
Architecture and Design of Information Systems for Companies.
Databases.
Information security.
Information Management.
Simulation Modeling.
See also
Specialist degree
Academic degree
Master's degree
Education in Russia
Information science
Computer science
References
Information science
Economics education
Academic degrees |
294399 | https://en.wikipedia.org/wiki/Code%20Co-op | Code Co-op | Code Co-op is the peer-to-peer revision control system made by Reliable Software.
Distinguishing features
Code Co-op is a distributed revision control system of the replicated type.
It uses peer-to-peer architecture to share projects among developers and to control changes to files. Instead of using a centralized database (the repository), it replicates its own database on each computer involved in the project.
The replicas are synchronized by the exchange of (differential) scripts. The exchange of scripts may proceed using different transports, including e-mail (support for SMTP and POP3, integration with MAPI clients, Gmail) and LAN.
Code Co-op has a built-in peer-to-peer wiki system, which can be used to integrate documentation with a software project. It is also possible to create text-based Wiki databases, which can be queried using simplified SQL directly from wiki pages.
Standard features
Distributed development support through E-mail, LAN, or VPN
Change-based model—modifications to multiple files are checked in as one transaction
File additions, deletions, renames, and moves are treated on the same level as edits—they can be added in any combination to a check-in changeset
File changes can be reviewed before a check-in using a built-in or user-defined differ
Synchronization changes can be reviewed in the same manner by the recipients
Three-way visual merge
Project history is replicated on each machine. Historical version can be reviewed, compared, or restored
Integration with Microsoft SCC clients, including Visual Studio
History
Code Co-op was one of the first distributed version control systems. It debuted at the 7th Workshop on System Configuration Management in May 1997.
The development of Code Co-op started in 1996, when Reliable Software, the distributed software company that makes it, was established. Reliable Software needed a collaboration tool that would work between the United States and Poland. The only dependable and affordable means of communication between the two countries was e-mail, hence the idea of using e-mail for the exchange of diffs. Of course, with such slow transport, using a centralized repository was infeasible. Each user of Code Co-op had to have a full replica of the repository, including the history of changes.
The problem was reduced to that of designing a distributed database that uses slow and unreliable transport for synchronization (later, faster LAN transport was also added). It also followed that the synchronization between multiple sites must use some kind of peer-to-peer protocol.
In 2018, the C++ source code for Code Co-op was released under the MIT License.
Theoretical foundations
Code Co-op is an example of a distributed database. Local repositories are considered the replicas of this virtual database. Each check-in corresponds to a distributed commit—a non-blocking version of a two-phase commit.
References
External links
ColdFusion Developer's Journal: Code Co-op Version Control Software from Reliable Software
Larkware News
Proprietary version control systems
Version control systems |
6028719 | https://en.wikipedia.org/wiki/Spring%20Security | Spring Security | Spring Security is a Java/Java EE framework that provides authentication, authorization and other security features for enterprise applications. The project was started in late 2003 as 'Acegi Security' (pronounced Ah-see-gee, whose letters are the first, third, fifth, seventh, and ninth characters of the English alphabet, chosen to prevent name conflicts) by Ben Alex, and was publicly released under the Apache License in March 2004. Subsequently, Acegi was incorporated into the Spring portfolio as Spring Security, an official Spring sub-project. The first public release under the new name was Spring Security 2.0.0 in April 2008, with commercial support and training available from SpringSource.
Authentication flow
Diagram 1 shows the basic flow of an authentication request using the Spring Security system. It shows the different filters and how they interact from the initial browser request, to either a successful authentication or an HTTP 403 error.
Key authentication features
LDAP (using both bind-based and password comparison strategies) for centralization of authentication information.
Single sign-on capabilities using the popular Central Authentication Service.
Java Authentication and Authorization Service (JAAS) LoginModule, a standards-based method for authentication used within Java. Note this feature is only a delegation to a JAAS Loginmodule.
Basic access authentication as defined through RFC 1945.
Digest access authentication as defined through RFC 2617 and RFC 2069.
X.509 client certificate presentation over the Secure Sockets Layer standard.
CA, Inc SiteMinder for authentication (a popular commercial access management product).
Su (Unix)-like support for switching principal identity over an HTTP or HTTPS connection.
Run-as replacement, which enables an operation to assume a different security identity.
Anonymous authentication, which means that even unauthenticated principals are allocated a security identity.
Container adapter (custom realm) support for Apache Tomcat, Resin, JBoss and Jetty (web server).
Windows NTLM to enable browser integration (experimental).
Web form authentication, similar to the servlet container specification.
"Remember-me" support via HTTP cookies.
Concurrent session support, which limits the number of simultaneous logins permitted by a principal.
Full support for customization and plugging in custom authentication implementations.
Key authorization features
AspectJ method invocation authorization.
HTTP authorization of web request URLs using a choice of Apache Ant paths or regular expressions.
Instance-based security features
Used for specifying access control lists applicable to domain objects.
Spring Security offers a repository for storing, retrieving, and modifying ACLs in a database.
Authorization features are provided to enforce policies before and after method invocations.
Other features
Software localization so user interface messages can be in any language.
Channel security, to automatically switch between HTTP and HTTPS upon meeting particular rules.
Caching in all database-touching areas of the framework.
Publishing of messages to facilitate event-driven programming.
Support for performing integration testing via JUnit.
Spring Security itself has comprehensive JUnit isolation tests.
Several sample applications, detailed JavaDocs and a reference guide.
Web framework independence.
Releases
2.0.0 (April 2008)
3.0.0 (December 2009)
3.1.0 (December 7, 2011)
3.1.2 (August 10, 2012)
3.2.0 (December 16, 2013)
4.0.0 (March 26, 2015)
4.1.3 (August 24, 2016)
4.2.0 (November 10, 2016)
3.2.10, 4.1.4, 4.2.1 (December 22, 2016)
4.2.2 (March 2, 2017)
4.2.3 (June 8, 2017)
5.0.0 (November 28, 2017)
5.0.8, 4.2.8 (September 11, 2018)
5.1.0 GA (September 27, 2018)
5.1.1, 5.0.9, 4.2.9 (October 16, 2018)
5.1.2, 5.0.10, 4.2.10 (November 29, 2018)
5.1.3, 5.0.11, 4.2.11 (January 11, 2019)
5.1.4 (February 14, 2019)
5.1.5, 5.0.12, 4.2.12 (April 3, 2019)
References
External links
Java enterprise platform
Computer access control |
56823793 | https://en.wikipedia.org/wiki/Remarkable%20%28tablet%29 | Remarkable (tablet) | Remarkable (styled as reMarkable) is an E Ink writing tablet for reading documents and textbooks, sketching and note-taking with the goal of a paper-like writing experience. Developed by a Norwegian startup company of the same name, the device is geared towards students, academics, and professionals.
The reMarkable blends the reading experience of an electronic paper display with the writing experience of a high-end tablet computer through its low-lag, Linux-based operating system.
History
The company was founded by Magnus Wanberg and started product development in Oslo in early 2014. It has collaborated with Taiwanese company E Ink. Development was started in 2013 and a crowdfunding campaign launched in late 2016. Pre-orders began in 2017.
The second-generation reMarkable 2 was announced on March 17, 2020. It was marketed as the 'world's thinnest tablet' (measuring 187 x 246 x 4.7 mm) and has been sold in batches since mid-2020 for 458 €/US$, including the pen.
Operating system
ReMarkable uses its own operating system, named Codex. Codex is based on Linux and optimized for electronic paper display technology.
Reception
The Remarkable RM100, launched in late 2017, was criticized for sluggishness when loading and unloading files. According to Wired, the reMarkable 2 "excels at taking your handwritten notes, but it doesn't do much else well." According to the podcast Bad Voltage, the lack of integrations with other software limits the device's usefulness for taking notes for some users, and there is no official third-party app ecosystem, but the ability to add software via unofficial hacks offers interesting possibilities.
See also
Comparison of e-readers
Sony Digital Paper DPTS1
Boox
PocketBook International
References
External links
Official webpage of reMarkable | The paper tablet
reMarkableWiki - Everything about the reMarkable Paper Tablet (Community Wiki)
A curated list of projects related to the reMarkable tablet
Dedicated e-book devices
Electronic paper technology
Linux-based devices
Electronics companies established in 2016
Crowdfunded consumer goods |
33079593 | https://en.wikipedia.org/wiki/Organizational%20information%20theory | Organizational information theory | Organizational Information Theory (OIT) is a communication theory, developed by Karl Weick, offering systemic insight into the processing and exchange of information within organizations and among their members. Unlike past structure-centered theories, OIT focuses on the process of organizing in dynamic, information-rich environments. Given that, it contends that the main activity of organizations is the process of making sense of equivocal information. Organizational members are instrumental in reducing equivocality and achieving sensemaking through several strategies: enactment, selection, and retention of information. With a framework that is interdisciplinary in nature, organizational information theory's desire to eliminate both ambiguity and complexity from workplace messaging builds upon earlier findings from general systems theory and phenomenology.
Inspiration and influence of pre-existing theories
1. General Systems Theory
The General Systems Theory, on its most basic premise, describes the phenomenon of a cohesive group of interrelated parts. When one part of the system is changed or affected, it affects the system as a whole. Weick draws on this theoretical framework, dating from the 1950s, in his organizational information theory. Likewise, organizations can be viewed as a system of related parts that work together towards a common goal or vision. Applying this to Weick's organizational information theory, organizations must work to reduce ambiguity and complexity in the workplace to maximize cohesiveness and efficiency. Weick uses the term coupling to describe how organizations, like a system, can be composed of interrelated and dependent parts. Coupling looks at the relationship between people and work.
There are two types of coupling:
1. Loose coupling
Loose coupling describes that while people within the organization or system are connected and often work together, they do not depend on one another to continue or fully complete individual work. The dependencies are weak and workflow is flexible. For example, "if the whole Science department completely shuts down because all of teachers are sick or for whatsoever reason, the school can still continue to operate because other departments are still present."
2. Tight coupling
Tight coupling describes when connections within an organization are strong and dependent. If one part of the organization is not operating correctly, the organization as a whole cannot continue to its fullest potential. "For instance, the format and ink section completely shuts down hence the succeeding steps cannot be continued, so the whole process of the organization will be dropped. Thus, components of a system are directly dependent on one another."
2. Theory of evolution
The theory of evolution, by Charles Darwin, is a framework for survival of the fittest. According to Darwin, organisms attempt to adapt and live in an unforgiving environment. Those that are unsuccessful in adaptation do not survive, while the strong organisms continue to thrive and reproduce. Weick draws inspiration from Darwin to incorporate a biological perspective into his theory. It is natural for organizations to have to adapt to incoming information that often interferes with the preexisting environment. Organizations that are able to plan and alter strategies in accordance with their constant need for organizing and sensemaking will survive and be the most successful. However, there is a notable difference between animal evolution and survival of the fittest in organizations: "A given animal is what it is; variation comes through mutation. But the nature of an organization can change when its members alter their behavior."
Assumptions
1. Human organizations exist in an information environment
Unlike senders and receivers models, OIT stands on the situational perspective. Karl Weick views a human organization as an open social system. People in that system develop a mechanism to establish goals, obtain and process information, or perceive the environment. In this process, people and the environment come to conclusions on "what's going on here?". Colville believes that this attributional process is retrospective.
Take an educational institution as an example. A university can obtain information regarding students' needs in numerous ways. It might create a feedback section on its website. It could organize alumni panels or academic affairs events to attract prospective students and collect the concrete questions they are interested in. It may also conduct surveys or host focus groups to get the information. After that, the staff of the university have to decide how to deal with this information, based on which the university has to set and accomplish its goals for current and prospective students.
2. The information an organization receives differs in terms of equivocality
Weick posits that numerous feasible interpretations of reality exist when organizations process information. Their varying levels of understandability lead to different outcomes of information inputs. In other academic works, scholars tend to say that messages are uncertain or ambiguous, while according to OIT, messages are described as equivocal. The theory holds that people proactively exclude a number of possibilities in order to perceive what is going on in the environment. Due to OIT's situational perspective, the meanings of messages consist of the messages themselves, the interpretations of receivers, and the interactional context. By contrast, ambiguity and uncertainty imply that a standard answer (the one true objective interpretation) exists.
Also, Weick emphasizes that "the equivocality is the engine that motivates people to organize". Maitlis and Christianson state that equivocality triggers sensemaking for three reasons: environmental jolts and organizational crises, threats to identity, and planned change interventions.
3. Human organizations engage in information processing to reduce equivocality of information
Based upon the first two assumptions, OIT proposes that information processing within organizations is a social activity. Sharing is the key feature of organizational information processing. In that particular context, members jointly make sense of reality by reducing equivocality. In other words, sensemaking is a joint responsibility that requires numerous interdependent people to accomplish. In this process, organizations and their members combine actions and attributions in order to find the balance between the complexity of thoughts and the simplicity of actions. Weick also proposes that people create their own environment through enactment, which is the action of making sense. This is because people have different perceptual schemas and selective perception, so people create different information environments. In creating different information environments, people can arrive at the same or close to the same understanding or solution through different thought processes and overall understanding.
Key concepts
The organization
In order to place Weick's vision regarding Organizational Information Theory into proper working context, exploring his view regarding what constitutes the organization and how its individuals embody that construct might yield significant insights.
From a fundamental standpoint, he shared a belief that organizational validation is derived not through bricks and mortar or locale, but from a series of events which enable entities to "collect, manage and use the information they receive." In elaborating further on what constitutes an organization during early writings outlining OIT, Weick said, "The word organization is a noun and it is also a myth. If one looks for an organization, one will not find it. What will be found is that there are events linked together, that transpire within concrete walls and these sequences, their pathways, their timing, are the forms we erroneously make into substances when we talk about an organization".
When viewed in this modular fashion, the organization meets Weick's theoretical vision by encompassing parameters that are less bound by concrete, wood, and structural restraints and more by an ability to serve as a repository where information can be consistently and effectively channeled. Taking these defining characteristics into account, proper channel execution relies on maximization of messaging clarity, context, delivery and evolution through any system.
One example as to how these interactions might unfold on a more granular level within these confines can be gleaned through Weick's double interact loop, which he considers the "building blocks of every organization". Simply put, double interacts describe interpersonal exchanges that, inherently, occur across the organizational chain of command and in life, itself.
Thus:
"An act occurs when you say something (Can I have a Popsicle?).
An interact occurs when you say something and I respond ("No, it will spoil your dinner).
A double interact occurs when you say something, I respond to that, then you respond to that, adjusting the first statement ("Well, how about half a Popsicle?)
Weick envisions the organization as a system taking in equivocal information from its environment, trying to make sense of that information, and using what was learned for the future. As such, organizations evolve as they make sense out of themselves and the environment.
"These communication cycles are the reason Weick focuses more on relationships within an organization than he does on an individual's talent or performance. He believes that many outside consultants gloss over the importance of the double interact because they depart the scene before the effects of their recommended action bounce back to affect the actor". By allowing us to consider the organization in this alternative framework, Organizational Information Theory provides us with a robust platform from which to explore the communication process, literally, as it unfolds.
It is important to note that the flow of equivocal information for organizations is constant and ongoing. Organizing, the process of making sense out of information, is a continuous cycle for organizations. Because of this, Weick prefers to use verbs when describing organizations; nouns give off a stationary and fixed connotation. For example, instead of the word "management", Weick would prefer to use the verb "managing". Using verbs reflects the fluidity of the sensemaking process, which is continually changing, whereas nouns reflect stationary or fixed entities, which is against what Weick is proposing.
Loose coupling and the information environment
In developing Organizational Information Theory, Weick took a "social psychological stance that notes that individual behavior is more a function of the situation than of personal traits or role definitions. Therefore, people are 'loosely connected' in most organizations and have a large latitude for action". As a way of formalizing this phenomenon, he "invites us to use the metaphor "loose coupling" in order to better understand organizations and aspects of organizations --particularly the variant kinds of connections that exist within organizations--that are either marginalized, ignored, or suppressed by normative bureaucracy".
So, in much the same way he suggested that organizations be viewed through a non-traditional lens in structure, he acknowledges that, by doing so, one may have to consider circumstances where "several means can produce the same result, while offering the appearance that lack of coordination, absence of regulations, and very slow feedback times are the norm".
While many might view these nuances as roadblocks or impediments to progress, Organizational Information Theory views each one as a catalyst for improved performance and positive change through: "increased sensitivity to a shifting environment, room for adaptation and creative solutions to develop, sub-system breakdown without damaging the entire organization, persistence through rapid environmental fluctuations and fostering an attitude where self-determination by the actors is key".
Another overriding component of Weick's approach is that information afforded by the organization's environment---including the culture within the organizational environment itself---can impact the behaviors and interpretation of behaviors of those within the organization. Thus, creation of organizational knowledge is impacted by each person's personal schema as well as the backdrop of the organization's objectives. The organization must sift through the available information to filter out the valuable from the extraneous. Additionally, the organization must both interpret the information and coordinate that information to "make it meaningful for the members of the organization and its goals." In order to construct meaning from these messages in their environment, the organization must reduce equivocality, while committing to an interpretation of the message which matches its culture and overall mission.
Accordingly, the "flashlight analogy" is used to explain the inseparability of action and knowledge present in this theory. One should imagine he is in a dark field at night with only a flashlight. He can vaguely pick out objects around him, but can't really tell what they are. Is that lump in the distance a bush or a dangerous animal? When he turns on his flashlight, however, he creates a circle of light that allows him to see clearly and act with relative clarity. The act of turning on the flashlight effectively created a new environment that allowed him to interpret the world around him. There is still only a single circle of light, though, and what remains outside that circle is still just as mysterious, unless the flashlight is redirected. With organizational information theory, the flashlight is mental. The environment is located in the mind of the actor and is imposed on him by his experiences, which makes them more meaningful.
Equivocality
Based on the number of rapidly moving parts within any organization (i.e., information flows, individuals, etc....) the foundation upon which messaging is received constantly shifts, thus leaving room for unintended consequences relative to true intent and meaning. Equivocality arises when communication outreach "can be given different interpretations because their substance is ambiguous, conflicted, obscure, or introduces uncertainty into a situation".
Organizational Information Theory provides a knowledge base and framework which can help mitigate these risks by decreasing the level of ambiguity present during relevant communication activities. Simultaneously, it serves as a construct whose potential for growth stems from active "communicating and organizing" and from "reducing the amount of equivocality" within a specified domain.
In looking at how equivocality evolves more closely, it can also manifest itself as a signature for highly interpretive events, along with those where the parameters (and uncertainty levels) are, traditionally, much more concrete. For instance, "equivocality also describes situations where there is agreement on a set of descriptive criteria (say, desirable market/undesirable market) but disagreement on either their boundaries (i.e., the point at which markets go from being desirable to undesirable) or on their application to a particular situation (whether a particular market is desirable or undesirable). Managing equivocality requires coordinating meaning among members of an organization, and is an essential part of organizing. Equivocality arises because everyone's experiences are unique; individuals and communities develop their own sets of values and beliefs and tend to interpret events differently. Equivocality also may result from unreliable or conflicting information sources, noisy communication channels, differing or ambiguous goals and preferences, vague roles and responsibilities, or disparate political interests".
Sensemaking
Karl Weick's Organizational Information Theory views organizations as " 'sensemaking systems' which incessantly create and re-create conceptions of themselves and of all around them".
From a less clinical (and more intuitive) perspective, Weick and his collaborator, Kathleen M. Sutcliffe, jointly describe sensemaking as an action which "involves turning circumstances into a situation that is comprehended explicitly in words or speech and that serves as a springboard to action".
In its more defined organizational context, sensemaking can be looked at as a process "that is applied to both individuals and groups who are faced with new information that is inconsistent with their prior beliefs". In factoring the uneasiness (or cognitive dissonance) that results from this experience, they will create narratives to fit the story which serve both as a buffer and a guiding light for further renditions of the story. "This explains how, for example, religious groups can have such stringent beliefs, how political parties can be confident in their diametrically-opposed positions, how organizations can develop very different cultures, and how individuals can develop very different interpretations for the same event".
The process of sensemaking usually starts with a circumstance or problem which requires a certain level of interpretation by others (i.e., something did or did not happen). Whether it is consciously or unconsciously driven, those involved then make a commitment to a perceived viewpoint surrounding those facts. "Commitment forms around the interpretation to bind the interpretation to future action. When publicly communicated, commitment is especially strong. Individuals are motivated to justify their commitments, so they initiate future actions and continually refine their interpretation of the original event so that their commitment to a course of action is deemed appropriate. These new actions produce "evidence" that validates the interpretation and are used to increase decision confidence".
These are critical facets which surround the sensemaking process:
a) sensemaking starts with noticing and bracketing
b) sensemaking is about labeling
c) sensemaking is retrospective
d) sensemaking is about presumption
e) sensemaking is social and systemic
f) sensemaking is about action
g) sensemaking is about organizing through communication
The idea of sensemaking is also a theme within Organizational Information Theory. Organizational sensemaking contrasts with organizational interpretation. When an organization interprets information, there is already a frame of reference in place and this is enough information for an organization to change course. Sensemaking occurs, however, when no initial frame of reference exists and no obvious connection presents itself. According to Weick, sensemaking can be driven by beliefs or actions. Beliefs shape what people experience and give form for the actions they take. For example, disagreement about beliefs in an organization can lead to arguments. This is a form of sensemaking.
Notably, sensemaking impacts organizations in three aspects: strategic change, organizational learning, and innovation and creativity. Regarding strategic change, individuals are triggered to alter their own roles and behaviors and also help others to coordinate with these new changes. A new organizational order about strategies is then constructed. As for learning, on the one hand, people learn from error: organizational understandings and routines are revised, updated, and strengthened in response to errors. On the other hand, sensemaking about gathered material and available options makes a great contribution to learning in more conventional contexts, especially in knowledge-intensive work settings. The details about its impact on innovation can be seen in the "Extension" section.
Choice points, behavior cycles and assembly rules
When information messaging remains an unclear variable, organizations will usually revert to a number of Organizational Information Theory-based methodologies which are designed to encourage ambiguity reduction:
1. Choice points--Describes an organization's decision to ask: "should we attend to some aspect of our environment that was rejected before?" Re-tracing one's steps can provide both management and individuals with a comfort zone in addressing the frequency and volume of messaging, in case anything has been missed.
2. Behavior/communication cycles--Represents "deliberate communication activities on the part of an organization to decrease levels of ambiguity". Importantly, degrees of messaging equivocality have a direct impact on how many cycles are required to alleviate their effects. Within this realm, three distinct steps emerge that are each focused on providing messaging clarity: act, response and adjustment. Each is designed to facilitate the retention and selection process. Act occurs when it is communicated that unclear or equivocal information is present. Response is the effort to help reduce the uncertain information. Lastly, adjustment happens when the behavior or information evaluation is changed or adjusted. Many times, this cycle has to be repeated. This is because equivocal information and communication cycles have a positive correlation: the larger the amount of complex information, the greater the need for several communication cycles. Griffin et al. (2015) relates a communication cycle to a wet towel by saying, "just as a twist of a wet towel squeezes out water, each communication cycle squeezes equivocality out of the situation." Examples of behavior cycles include staff meetings, coffee-break rumoring, e-mail conversations, internal reports, etc.
3. Assembly rules--Signifies a broader construct, "which may include evaluating how standard operating procedures (SOP) are carried out, along with chain-of-command designations". By its nature, this approach explores protocol measures that might be effective in handling ambiguity, as well as, how related processes might unfold. These are rules that have served well in the past and have therefore become standard in the organization. Examples of assembly rules include a manual or handbook. Generally, assembly rules are used when the level of equivocal information is low.
Strategies
The principles of equivocality
Three critical principles, or relationships, guide the process of equivocality reduction: the relationship among equivocality, the rules, and the cycles must be carefully analyzed; the relationship between the number of rules and the amount of cycles; and the relationship between the number of cycles and the amount of equivocality.
To be specific, Weick posits that the amount of perceived equivocality influences the number of rules. Generally speaking, there is a negative correlation between the level of equivocality and the number of rules: the more equivocal the message is, the fewer rules are available to process that information. Meanwhile, an inverse relationship also exists between rules and cycles; in other words, fewer rules lead to more use of cycles. The increasing number of cycles used can then reduce the equivocality. By using a higher number of cycles, this theory can be used as a rubric from which shared sensemaking can be accomplished in the process of organizing. This rubric sets the layout for how thought processes change and are modified based on new information, different environments (work, social, living), and the different people with whom assembly cycles are used.
Stages of equivocality reduction
According to Weick, organizations experience continuous change and are ever-adapting, as opposed to a change followed by a period of stagnancy. Building off of Orlikowski’s idea that the changes that take place are not necessarily planned, but rather inevitably occur over time, Organizational Information Theory explains how organizations use information found within the environment to interpret and adjust to change. In the event that the information available in the information environment is highly equivocal, the organization engages in a series of cycles that serve as a means to reduce uncertainty about the message. A highly equivocal message might require several iterations of the behavior cycles. An inverse relationship exists between the number of rules established by the organization to reduce equivocality and the number of cycles necessary to reduce equivocality. Similarly, the more cycles used, the less equivocality remains.
Enactment
Weick emphasizes the role of action, or enactment, in change within an organization. Through a combination of individuals with existing data and external knowledge, and through an iterative process of trial and error, ideas are refined until they become actualized. Enactment also plays a key role in the idea of sensemaking, the process by which people give meaning to experience. Essentially, the action helps to define the meaning, making those within the organization's environment responsible for the environment itself.
Selection
Upon analyzing the information the organization possesses, the selection stage includes evaluation of outstanding information necessary to further reduce equivocality. The organization must decide the best method for obtaining the remaining information. Generally, the decision-makers of the organization play a key role in this stage.
There are three critical processes happening in this stage: 1) members make a choice among interpretations; 2) members choose the type and amount of rules for processing those interpretations; 3) communication cycles start to work on those interpretations.
Retention
The final stage occurs when the organization sifts through the information it has compiled in its attempts to adapt to change, and determines which information is beneficial and worth utilizing again. Inefficient, superfluous, and otherwise unnecessary information that does not contribute to the completion of the project or the reduction of equivocality will most likely not be retained for future application to similar projects.
Applications
Application in health care
One of the key real-world applications regarding Weick's concept of Organizational Information Theory can be found in healthcare. There, he went so far as to personally develop a dedicated health communications approach which "emphasizes the central role of communication and information processing within social groups and institutions". Specifically, Weick's work draws correlations between accuracy of information and the ability of organizations to adapt to change.
Weick's model of organizing plays a powerful role in improving communication of health care and health promotion. The OIT enables consumers and providers to reduce equivocality when they face complex health care and health promotion situations. "In health care and health promotion, enactment processes are used to make sense of different health-related challenges, selection processes are used to choose different courses of action in response to these challenges, and retention processes are used to preserve what was learned from enactment and selection processes for guiding future health care/promotion activities".
For instance, the theory can evaluate the problems of excessive nurse turnover in public hospital and develop interventions to address the problems. Hospital administrators used to deal with the problem by making efforts in recruiting nurses. Although the strategy attracted more new nurses, it was expensive to maintain the recruitment efforts. Thus, a retention program was generated under the Weick's model of organizing. The program used questionnaires, in-depth interviews and focus group discussion to figure out nurses' concerns (enactment). The research's results identified strategies to solve those problems (selection). Then the program gathered further information about nurses' attitude and advice for these strategies and implemented refined strategies (retention).
Application in education
Some scholars advocate that loosely coupled systems and the garbage can model guarantee the flexibility of higher education organizations. Proponents of loosely coupled systems believe that a university's academic freedom and students' individual identity will be destroyed if administrators tighten up the loose coupling. However, Weick argues that "unpredictability (of an organization) is insufficient evidence for concluding that the elements in a system are loosely coupled". Other scholars note Weick's warning that loose coupling should not be used as a normative model. Universities will not lose their academic freedom with a more tightly coupled system. Frank W. states that "They (universities) are tightly coupled in some aspects and uncoupled in other aspects. Tight coupling occurs when an issue supports the status quo. Uncoupling occurs when an issue challenges the status quo".
Weick's model of organizing can be applied to reduce equivocality in the large-lecture classroom and to increase students' engagement. The large-lecture classroom can be recognized as an information environment with various degrees of equivocality. Students enact assembly rules to make sense of messages in class with low equivocality. Behavior cycles which focus on act, response, and adjustment can be utilized by students to clarify messages with high equivocality. "Students assess how the applied rules and cycles affected their ability to interpret the original input's equivocality and decide if additional rules and cycles are needed to develop an effective response to the input". Some students feel intimidated when they raise questions in the large-lecture classroom. The synchronicity and anonymous nature of microblogging thus make it a second channel that facilitates students' question asking and decreases their equivocality. Faculty are able to retain organizational intelligence through the microblog format.
Application in conflict management
It is difficult to find two parties which share the exact same interests. Thus, conflict and cooperation coexist in organizations. Institutionalized conflict management is frequently used by managers to create sustainable organizations. Metaphors provide a comprehensive approach to understanding and interpreting the information environment, which includes new knowledge and new practices. "Metaphor created by subsidiary representatives of the conflict management practice reflects the quality and the depth of institutionalization". Metaphors can be recognized as a form of collective sensemaking and a depiction of the organizational environment. Individuals are able to make decisions which depend on their metaphors about conflict in organizations.
Critiques
Utility
This theory focuses on the process of communication instead of the role of individual actors. It examines the complexities of information processing in lieu of trying to understand people within a group or organization. Additionally, this theory closely examines the act of organizing, rather than organizations themselves. Weick defines organizing as, "the resolving of equivocality in an enacted environment by means of interlocked behaviors embedded in conditionally related process" and that, "human beings organize primarily to help them reduce the information uncertainty in their lives".
Logical consistency
Some scholars argue that this theory fails the test of logical consistency and that people are not necessarily guided by rules in an organization. Some organizational members might not have any interest in communication rules and their actions might have more to do with intuition than anything else.
Other critics posit that organizational information theory views the organization as a static entity, rather than one that changes over time. Dynamic adjustments, such as downsizing, outsourcing and even advancements in technology should be taken into consideration when examining an organization—and organizational information theory does not account for this.
Critics of this theory assert that it does not deal significantly with hierarchy or conflict, two prominent themes associated with organizational communication. In some cases, the hierarchical context creates difficulties for sensemaking and discourages the upward flow of negative feedback. The sensemaking process can be applied to explain why employees remain silent in the organization. Two sensemaking resources, expectation and identity, preclude employees from giving upward negative feedback. Employees expect that their negative feedback for supervisors will pose a threat to their job security or might be neglected by supervisors. Besides, employees make sense of their own understanding and identify themselves as deficient experts who are unable to make the best decisions.
Scope
The theory also neglects the larger social, historical, and institutional context. More attention should be paid to the role of the institutional context and cultural-cognitive institutions in sensemaking. According to Taylor and Van Every, "what is missing [in Weick's 1995 Sensemaking in Organizations version of enactment] ... is an understanding of the organization as a communicational construction or an awareness of the institutionalizing of human society that accompanies organization with its many internal contradictions and tensions". Actors are constrained by cognition through socialization on the job, in the school system, and in the media when they engage in sensemaking in institutions. This causes less variety and more stability in institutions. In order to expand the theory, social mechanisms can be applied to consider how institutions prime, edit, and trigger sensemaking in addition to the traditional cognitive constraint.
Extension of OIT
Dr. Brenda Dervin: the Sense-Making Approach
As an adjunct to Weick's work regarding organizational information, noted academician (and fellow researcher), Dr. Brenda Dervin, followed a similar path in exploring how ambiguity and uncertainty are handled across platforms.
However, in broaching these issues from a more communication-driven perspective, Dr. Dervin found that they evolve from a different place: one that, unlike Weick's, assumes "discontinuity between entities, times and spaces". Instead of modularity, "each individual is an entity moving through time and space, dealing with other entities which include other people, artifacts, systems, or institutions. The individual's making of sense as a strategy for bridging these gaps is the central metaphor of the Sense-Making Approach".
By utilizing this alternate prism, Dr. Dervin found that "patterns of gap-bridging behavior are better-predicted by the way individuals define the gaps in which they find themselves, than by any attributes that might be typically used to define individuals across space and time, such as demographic categories or personality indicators. Situations and people are constantly changing, but patterns of interaction between people and situations as they are defined by people seem to be somewhat more stable".
Organizational Information Theory and innovation
"In 2000, the researchers Doughtery, Borrelli, Munir and O'Sullivan discovered that the level of innovation capability within organizations was connected to the ability of making the right sense of collective experiences in uncertain or ambiguous situations such as radical changes in the market or technology paradigm shifts".
Instinctively, it seems, the sensemaking systems of less innovative companies appeared to be inhibited by tendencies to "play within existing rules...filter out unexpected information or unorthodox competency sets, resulting in groupthink and more of the same". Meanwhile, "innovative organizations ...dared to challenge existing business logic and made use of new insights"...these, in turn, were used to "change existing interpretations and schemes and thus, influence the overall evolvement of the organization so that the interpretation schemes themselves became more apt to deal with outside forces".
Weick proposes three dynamics which are closely relevant to innovation: "that the knowledge is both tacit and articulated, that linking processes cut across several levels of organizational action, and that linking processes embody several tensions". One study applies these dynamics and organizational sensemaking to analyze how people make sense of market and technology knowledge for product innovation. Innovative organizations have a system of sensemaking that allows actors to "construct, bracket, interpret, and rethink the right kinds of market and technology knowledge in the right way for innovation", whereas actors in non-innovative organizations tend to make sense of market and technology knowledge separately, without linking them.
Computer-mediated communication (CMC) and sensemaking
Computer-mediated communication has gradually become an essential part of communication in current work contexts and organizational settings. Weick proposes that sensemaking away from terminals involves five processes: effectuate, triangulate, affiliate, deliberate, and consolidate.
According to Gephart, compared with the traditional workplace, sensemaking in an electronically enabled workplace becomes more complex and ambiguous, and the five processes above operate differently. Four distinct features can be seen. First, the process of reciprocity is much more confined: the reciprocity of perspectives may be delayed when communicators are at different "terminals", and sensory cues are nearly absent while textual communication dominates; in other words, communication modes are limited. Second, the level of equivocality and the difficulty of sensemaking increase, because problems in understanding the computer or new technical terms become an additional obstacle to sensemaking in organizations. Third, fewer "etcetera" principles can be used in CMC: it tends to build a rigidly structured communication environment in which people find it harder to gain additional information from outside the computer system. Lastly, CMC alters occupational tasks, since some technical background or computer-related knowledge is required to support general occupational tasks.
There are two major models proposed to explain the sensemaking with CMC in organizations: Weick's cognitive model and a social interaction model.
Weick believes that people increasingly encounter problems in making sense of information with new technology. The cognitive model posits that new computer technologies affect sensemaking in organizations in five ways. First, action deficiencies occur because real objects cannot be fully captured and represented by simulated images and symbols. Second, comparisons are often deficient due to the limited reciprocity of perspectives and the near absence of feedback based on direct action. Third, work at terminals is usually solitary and decreases the affiliation side of communication. Fourth, the constant inflow of information at the computer interrupts or even prevents deliberation. Lastly, sensemaking at terminals tends to lead to consolidation deficiencies because of the self-contained nature of the computer.
The social interaction model, derived from phenomenological sociology and ethnomethodology, emphasizes the intersubjective and objective features of sensemaking. The intersubjective feature means that people continually try to grasp the subjective meanings of others. In CMC, communicators must actively work out whether they share meanings with others, including computers. In this process, people are expected to use normal-form objects, terms and utterances to portray contexts and experiences correctly so that the computer can recognize them. More knowledge is needed to achieve sensemaking than in the traditional workplace, so the etcetera principle comes into use to deal with vague or implicit meanings: people actively seek additional information to clarify them. Meanwhile, descriptive vocabularies are used as indexical expressions that help people resolve equivocal meanings based on contextual, cultural and technical knowledge and information.
See also
Collaborative information seeking
Computer-supported collaborative learning
Computer-supported collaboration
Diffusion of innovation theory
Organizational learning theory
Sociological theory of diffusion
Uncertainty reduction theory
References
Further reading
Jay R. Galbraith, Organization Design. Reading, Massachusetts: Addison-Wesley Publishing Company, 1977.
Karl E Weick, "The Collapse of Sensemaking in Organizations: The Mann Gulch Disaster." Administrative Science Quarterly, 38, No. 4 (1993): 628-652.
Karl E. Weick, Making Sense of the Organization. Malden: Blackwell Publishing Ltd, 2004.
Karl E. Weick, Sensemaking in Organizations. London: SAGE Publications, Inc., 1995.
Karl E. Weick and Susan J. Ashford, (2001) "Learning in Organizations". In Frederic M. Jablin and Linda L. Putnam (Ed.) The New Handbook of Organizational Communication: Advances in Theory, Research, and Methods. pp. 704–731. London: Sage Publications, Inc.
Communication theory
Information science
Organizational theory |
34998367 | https://en.wikipedia.org/wiki/Greensburg%20Red%20Wings | Greensburg Red Wings | The Greensburg Red Wings were a Class D Minor League Baseball team based in Greensburg, Pennsylvania. The team was a member of the Pennsylvania State Association, from - and played all of its home games at Offutt Field. The team's name often changed throughout their short existence. They began as the Greensburg Trojans, an affiliate of the St. Louis Cardinals. A year later, in , the team was renamed the Greensburg Red Wings. However, in when the Brooklyn Dodgers took over the team, they were renamed the Greensburg Green Sox. Finally, the team was called the Greensburg Senators, after their final affiliate, the Washington Senators, in 1939.
Notable moments
In the summer of 1936, Major League Baseball's St. Louis Cardinals, behind Pepper Martin, defeated the Greensburg Red Wings, 11–0, in front of 1,500 spectators at Offutt Field. In 1937, the Greensburg Green Sox were instrumental in getting funds for lights at Offutt Field in the city, setting the stage for night high school football, which debuted that fall. The field hosted minor league teams that were affiliated with the Cardinals, Washington Senators, and Brooklyn Dodgers.
Major League alumni
Johnny Blatnik (1939 Senators)
Pete Center (1934 Trojans/1935 Red Wings)
Joe Cleary (1938 Green Sox)
Pat Cooper (1936 Red Wings)
Red Davis (1935 Red Wings)
Stan Ferens (1937 Green Sox/1939 Senators)
Nick Goulish (1938 Green Sox)
Otto Huber (1936 Red Wings)
Ken Holcombe (1938 Green Sox)
Eddie Lopat (1937 Green Sox)
Rube Melton (1936 Red Wings)
Heinie Mueller (1935 Red Wings)
Eddie Morgan (1934 Trojans)
Lynn Myers (1934 Trojans)
Bob Scheffing (1935 Red Wings)
Lou Scoffic (1934 Trojans)
Bud Souchock (1939 Senators)
Tom Sunkel (1934 Trojans)
Season-by-season
(from Trojans Baseball Reference Bullpen)
(from Red Wings Baseball Reference Bullpen)
(from Green Sox Baseball Reference Bullpen)
(from Senators Baseball Reference Bullpen)
References
External links
Greensburg Trojans/Red Wings/Green Sox/Senators at BaseballReference.com
Baseball teams established in 1934
Defunct minor league baseball teams
Pennsylvania State Association teams
Greensburg, Pennsylvania
Brooklyn Dodgers minor league affiliates
Washington Senators minor league affiliates
St. Louis Cardinals minor league affiliates
1934 establishments in Pennsylvania
1939 disestablishments in Pennsylvania
Sports clubs disestablished in 1939
Defunct baseball teams in Pennsylvania
Baseball teams disestablished in 1939 |
4398814 | https://en.wikipedia.org/wiki/Ecco%20Pro | Ecco Pro | Ecco Pro was a personal information manager software based on an outliner, and supporting folders similar to spreadsheet columns that allow filtering and sorting of information based upon user defined criteria.
The software was originally produced by Arabesque Software in 1993, then purchased by NetManage, and discontinued in 1997.
Overview
The product offers three primary types of views – phone book views, calendar views, and notepad views. Central to the program's design is an outlining structure and the ability to easily manipulate information regardless of in which view it was entered. Multiple notepad, calendar, and phonebook views can be opened, and each item seen in each view can be a collapsible outline, with each line assignable to folders/categories which can themselves be their own views, text field, pulldown menu, calendar date (including repeating date), or phonebook entry.
Product functionality
ECCO Professional was introduced by Arabesque Software in 1993 as a Personal Information Manager (PIM) with a database backend. This version supports calendar and contact data, as well as to-do lists, and allows integration with other software via import and export capability, Dynamic Data Exchange (DDE) and Object Linking and Embedding (OLE). A feature called "Shooter" puts a cut-and-paste tool at the top of the screen, facilitating copying of data to and from ECCO. The user interface is based on a "universal outliner" and folders, which allow the user to build a variety of views organizing related information of mixed types. Data is stored as discrete objects, and can be dragged as dynamic links to multiple folders, creating cross references. Ecco version 1.x supports shared folders and outlines for network access to data, but does not support Windows workgroups. RAM-based, the program was considered fast and relatively easy on laptop batteries, but a heavy consumer of system resources.
ECCO version 2.0, released in 1994, added support for workgroups, including group scheduling via email systems compliant with MAPI or VIM protocols, and Microsoft Schedule Plus, and sharing of contacts, calendars, and outlines, as well as file synchronization and reconciliation via intranet connections or email. In 1995 PC Magazine praised ECCO as a workgroup tool for scheduling and task management and noted its ability to handle free form data, but considered version 2.0 a "poor choice as a contact manager" which requires customization to match features of contemporary products, and lacks structured and complex search queries, good reporting, logging and correspondence functions.
ECCO version 3.0 was released in the summer of 1995 with an updated user interface based on a ring binder. Other additions include an Internet launch tool equipped with an address book containing links to over 2,000 sites. Internet support for the Shooter tool allows the user to push a URL and title for a web page back to ECCO. Searching improved with a query tool based on forms and support for boolean filters.
ECCO Pro version 4.0 added 32-bit support and OLE 2.0, as well as integration with NetManage's Chameleon and Z-Mail. Version 4.01 added support for the Palm Pilot.
History
Ecco Pro was originally developed by Pete Polash, who had sold an early Macintosh-based presentation program to Aldus, and Bob Perez, a Harvard-trained lawyer hired by Apple as a programmer and evangelist in the 1980s. It was first released in 1993 by Arabesque Software, Inc., based in Bellevue, Washington. PC Magazine awarded ECCO Pro its Editor's Choice award in 1996 and 1997.
Development by NetManage ceased in 1997 after the July 1997 release of version 4.01. Andrew Brown wrote in The Guardian:
"So what happened to the paragon of a program? The market killed it. First it was sold to a much larger company, Netmanage; presumably doing this made the original programmers a lot of money. Then Netmanage panicked when Microsoft Outlook came along as a "free" part of the Office suite, and killed development on the program." NetManage chief executive officer Zvi Alon noted that 'As soon as Microsoft decided to give away Outlook with Office, we started getting phone calls questioning the value of Ecco Pro'.
Even though the source code for Ecco Pro is not open source, development of plugin extensions to the software continues. According to Scott Rosenberg, a programmer using the handle "slangmgh" developed an extension to Ecco Pro posted to ecco_pro users group on Yahoo which includes fixes and upgrades to the program, and may incorporate the Lua scripting language.
MyPhoneExplorer, EccoPro-to-Android synchronization software able to synchronize contacts, calendars, tasks and Ecco outlines to Android phones and tablets, was released on 9 July 2013.
References
External links
Ecco Pro users' group
Personal information managers
Outliners
Note-taking software
Calendaring software
Micro Focus International |
29157021 | https://en.wikipedia.org/wiki/Padding%20oracle%20attack | Padding oracle attack | In cryptography, a padding oracle attack is an attack which uses the padding validation of a cryptographic message to decrypt the ciphertext. In cryptography, variable-length plaintext messages often have to be padded (expanded) to be compatible with the underlying cryptographic primitive. The attack relies on having a "padding oracle" who freely responds to queries about whether a message is correctly padded or not. Padding oracle attacks are mostly associated with CBC mode decryption used within block ciphers. Padding modes for asymmetric algorithms such as OAEP may also be vulnerable to padding oracle attacks.
Symmetric cryptography
In symmetric cryptography, the padding oracle attack can be applied to the CBC mode of operation, where the "oracle" (usually a server) leaks data about whether the padding of an encrypted message is correct or not. Such data can allow attackers to decrypt (and sometimes encrypt) messages through the oracle using the oracle's key, without knowing the encryption key.
Padding oracle attack on CBC encryption
The standard implementation of CBC decryption in block ciphers is to decrypt all ciphertext blocks, validate the padding, remove the PKCS7 padding, and return the message's plaintext.
If the server returns an "invalid padding" error instead of a generic "decryption failed" error, the attacker can use the server as a padding oracle to decrypt (and sometimes encrypt) messages.
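A minimal sketch (in Python, with hypothetical function names) of the kind of PKCS#7 padding check whose distinguishable "invalid padding" error turns a server into a padding oracle:

    def unpad_pkcs7(plaintext: bytes, block_size: int = 16) -> bytes:
        # Reject anything whose trailing bytes are not a valid PKCS#7 pad.
        pad_len = plaintext[-1]
        if pad_len < 1 or pad_len > block_size:
            raise ValueError("invalid padding")      # distinguishable error -> oracle
        if plaintext[-pad_len:] != bytes([pad_len]) * pad_len:
            raise ValueError("invalid padding")      # distinguishable error -> oracle
        return plaintext[:-pad_len]

A server that reports this error differently from a generic decryption failure (or takes measurably different time) leaks exactly the one bit per query that the attack described below needs.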
The mathematical formula for CBC decryption is Pi = DK(Ci) XOR Ci−1, with C0 = IV, where DK denotes block-cipher decryption under the key K.
As depicted above, CBC decryption XORs each plaintext block with the previous block.
As a result, a single-byte modification in block Ci−1 will make a corresponding change to a single byte in Pi.
Suppose the attacker has two ciphertext blocks C1, C2 and wants to decrypt the second block to get the plaintext P2.
The attacker changes the last byte of C1 (creating C1') and sends (C1', C2) to the server.
The server then returns whether or not the padding of the last decrypted block (P2') is correct (equal to 0x01).
If the padding is correct, the attacker now knows that the last byte of DK(C2) XOR C1' is 0x01. Therefore, the last byte of P2 equals 0x01 XOR (last byte of C1') XOR (last byte of C1).
If the padding is incorrect, the attacker can change the last byte of C1' to the next possible value.
At most, the attacker will need to make 256 attempts (one guess for every possible byte) to find the last byte of P2. If the decrypted block contains padding information or bytes used for padding, then an additional attempt will need to be made to resolve this ambiguity.
After determining the last byte of P2, the attacker can use the same technique to obtain the second-to-last byte of P2.
The attacker sets the last byte of the decrypted block P2' to 0x02 by setting the last byte of C1' to (last byte of C1) XOR (last byte of P2) XOR 0x02.
The attacker then uses the same approach described above, this time modifying the second-to-last byte until the padding is correct (0x02, 0x02).
If a block consists of 128 bits (AES, for example), which is 16 bytes, the attacker will obtain plaintext in no more than 255⋅16 = 4080 attempts. This is significantly faster than the 2^128 attempts required to brute-force a 128-bit key.
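A compact Python sketch of the byte-recovery loop described above; the oracle(prev, block) callable is a stand-in for however the attacker actually queries the server, and is not part of any particular library:

    def recover_last_byte(c1: bytes, c2: bytes, oracle) -> int:
        # oracle(prev, block) -> True if the server accepts the padding of
        # the block decrypted with `prev` as its preceding ciphertext block.
        for guess in range(256):
            forged = bytearray(c1)
            forged[-1] = guess
            if oracle(bytes(forged), c2):
                # Padding valid, so the last byte of D_K(c2) XOR forged is 0x01.
                # (A second query with the next-to-last byte flipped would rule
                # out the rare case of a longer valid padding.)
                return 0x01 ^ guess ^ c1[-1]      # last byte of the real P2
        raise RuntimeError("no padding accepted; oracle not usable")

Repeating the same loop with target padding values 0x02 0x02, 0x03 0x03 0x03, and so on recovers the remaining bytes of the block.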
Encrypting messages with Padding oracle attack (CBC-R)
CBC-R turns a decryption oracle into an encryption oracle, and is primarily demonstrated against padding oracles.
Using the padding oracle attack, CBC-R can craft an initialization vector and ciphertext block for any plaintext:
decrypt any ciphertext Pi = PODecrypt( Ci ) XOR Ci−1,
select previous cipherblock Cx−1 freely,
produce valid ciphertext/plaintext pair Cx−1 = Px XOR PODecrypt( Cx ).
To generate a ciphertext that is N blocks long, the attacker must perform N padding oracle attacks. These attacks are chained together so that the proper plaintext is constructed in reverse order, from the end of the message (CN) to the beginning of the message (C0, the IV). In each step, the padding oracle attack is used to construct the preceding ciphertext block (ultimately the IV) for the ciphertext block chosen in the previous step.
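A Python sketch of that chaining; po_decrypt(block) stands for a routine that recovers DK(block) byte by byte using the padding oracle attack above, and all names here are illustrative rather than taken from a real library:

    import os

    def cbc_r_encrypt(plaintext_blocks, po_decrypt, block_size=16):
        # Work backwards: pick the last ciphertext block freely, then make each
        # preceding block equal to D_K(next block) XOR desired plaintext block.
        blocks = [os.urandom(block_size)]                 # C_N, chosen arbitrarily
        for p in reversed(plaintext_blocks):
            intermediate = po_decrypt(blocks[0])          # D_K(C_i), via the oracle
            prev = bytes(a ^ b for a, b in zip(intermediate, p))
            blocks.insert(0, prev)
        return blocks[0], b"".join(blocks[1:])            # (IV, ciphertext C_1..C_N)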
The CBC-R attack will not work against an encryption scheme that authenticates ciphertext (using a message authentication code or similar) before decrypting.
Attacks using padding oracles
The original attack was published in 2002 by Serge Vaudenay. Concrete instantiations of the attack were later realised against SSL and IPSec. It was also applied to several web frameworks, including JavaServer Faces, Ruby on Rails and ASP.NET as well as other software, such as the Steam gaming client. In 2012 it was shown to be effective against some hardened security devices.
While these earlier attacks were fixed by most TLS implementors following their public announcement, a new variant, the Lucky Thirteen attack, published in 2013, used a timing side-channel to re-open the vulnerability even in implementations that had previously been fixed. As of early 2014, the attack is no longer considered a threat in real-life operation, though it is still workable in theory (see signal-to-noise ratio) against a certain class of machines. The most active area of development for attacks upon cryptographic protocols used to secure Internet traffic has been downgrade attacks, such as the Logjam and Export RSA/FREAK attacks, which trick clients into using less-secure cryptographic operations provided for compatibility with legacy clients when more secure ones are available. An attack called POODLE (late 2014) combines a downgrade attack (to SSL 3.0) with a padding oracle attack on the older, insecure protocol to enable compromise of the transmitted data. In May 2016 it was revealed that the fix against Lucky Thirteen in OpenSSL introduced another padding oracle.
References
Cryptographic attacks
Transport Layer Security
Computation oracles |
18701317 | https://en.wikipedia.org/wiki/SK1%20%28program%29 | SK1 (program) | sK1 is an open-source, cross-platform illustration program that can be used as a substitute for professional proprietary software like CorelDRAW or Adobe Illustrator. Unique project features are CorelDRAW formats importers, tabbed multiple document interface, Cairo-based engine, and color management.
History
A small team led by Igor Novikov started the project in 2003, based on the existing open source vector graphics editor Skencil. sK1 is a fork of the Skencil 0.6.x series which used Tk widgets for the user interface (this version had been dropped by the main Skencil developers who were working on a branch of the program based on GTK+). Although an attempt was made to unify the project with Skencil, it failed.
In 2007 the sK1 team reverse-engineered the CorelDRAW (CDR) format. The results and the first working snapshot of the CDR importer were presented at the Libre Graphics Meeting 2007 conference taking place in May 2007 in Montreal (Canada). Later on the team parsed the structure of other Corel formats with the help of CDR Explorer. As of 2008, the sK1 project claims to have the best import support for CorelDRAW file formats among open source software programs. Export into CDR and CMX file formats was presented at the Libre Graphics Meeting 2019 conference taking place in May 2019 in Saarbrücken (Germany).
Target audience
Since the project was started by a small team of Ukrainian professionals in prepress, it was unambiguously focused on full support for PostScript, PDF, CMYK color model and color management at the expense of developing some advanced functions for illustrators. Informally the project is positioned as a free open source alternative to the commercial CorelDRAW.
Functionality
Tools
Selection
Node edit
Magnifier glass
Drawing of joint lines (polylines)
Bézier curves drawing
Rectangle drawing
Ellipse drawing
Polygon drawing
Text editing
Gradient drawing/editing
Supported formats
Import
CorelDRAW v7-X4 (CDR/CDT/CCX/CDRX/CMX)
Adobe Illustrator up to version 9 (based on PostScript)
Postscript (PS) and Encapsulated Postscript (EPS)
Computer Graphics Metafile (CGM)
Windows Metafile (WMF)
XFIG
Scalable Vector Graphics (SVG)
Skencil/Sketch/sK1 (SK, SK1, SK2)
Acorn Draw (AFF)
PLT - HPGL cutting plotter files
CorelDRAW palettes (CPL and XML)
Adobe Swatch Exchange palettes (ASE)
Adobe Photoshop palettes (ACO)
Xara Designer palettes (JCW)
GIMP/Inkscape palettes (GPL)
LibreOffice palettes (SOC)
Scribus palettes (XML)
sK1 palettes (SKP)
Adobe Photoshop files (PSD)
GIMP files (XCF)
Images BMP, PNG, JPG, JPEG2000, TIFF, GIF, PCX, PPM, WEBP, XBM, XPM
Export
AI - Adobe Illustrator 5.0 (based on PostScript)
PDF - Portable Document Format
PS - PostScript
SVG - Scalable Vector Graphics
SK/SK1/SK2 - Skencil/Sketch/sK1
CGM - Computer Graphics Metafile
WMF - Windows Metafile
PLT - HPGL cutting plotter files
CorelDRAW palettes (CPL and XML)
Adobe Swatch Exchange palettes (ASE)
Adobe Photoshop palettes (ACO)
Xara Designer palettes (JCW)
GIMP/Inkscape palettes (GPL)
LibreOffice palettes (SOC)
Scribus palettes (XML)
sK1 palettes (SKP)
PNG - Portable Network Graphics
Side projects
UniConvertor
an application for converting files from one vector format into another. It is effectively a part of sK1, rewritten as standalone code and developed by the same team. UniConvertor is also used by Inkscape for opening CorelDRAW, WMF and Sketch/Skencil files. As part of Google Summer of Code 2008, UniConvertor support was being prepared for Scribus.
Color Palette Collection
a set of free palettes provided in different native file formats for sK1, Inkscape, GIMP, Scribus, LibreOffice, CorelDRAW, Adobe Illustrator, Xara Designer etc. For sK1 2.0 the palette collection is available as a web service.
CDR Explorer
a program that simplifies the reverse-engineering of CorelDRAW formats.
LinCutter
an application for interactive work with cutting machines (PLT format).
Awards
In 2007 the project was awarded the second place in the Trophées du Libre open source project contest in the "Multimedia and games" category.
In 2008 the project was awarded the third place in the contest Hackontest, organized by the Swiss Open Systems User Group /ch/open and sponsored by Google.
In 2009 the project was awarded the second place among group projects in the contest "The best free project of Russia", conducted by Linux Format magazine.
In 2009 the UniConvertor project was awarded the first place in the Trophées du Libre open source project contest in the "Multimedia" category.
sK1 versions
UniConvertor versions
See also
Comparison of vector graphics editors
References
External links
Free vector graphics editors
Free diagramming software
Free software programmed in C
Free software programmed in Python
Software forks
Vector graphics editors for Linux
Vector graphics editors
Software that uses wxWidgets |
15145 | https://en.wikipedia.org/wiki/ISO%209660 | ISO 9660 | ISO 9660 (also known as ECMA-119) is a file system for optical disc media. Being sold by the International Organization for Standardization (ISO) the file system is considered an international technical standard. Since the specification is available for anybody to purchase, implementations have been written for many operating systems.
ISO 9660 traces its roots to the High Sierra Format, which arranged file information in a dense, sequential layout to minimize nonsequential access by using a hierarchical (eight levels of directories deep) tree file system arrangement, similar to UNIX and FAT. To facilitate cross platform compatibility, it defined a minimal set of common file attributes (directory or ordinary file and time of recording) and name attributes (name, extension, and version), and used a separate system use area where future optional extensions for each file may be specified. High Sierra was adopted in December 1986 (with changes) as an international standard by Ecma International as ECMA-119 and submitted for fast tracking to the ISO, where it was eventually accepted as ISO 9660:1988. Subsequent amendments to the standard were published in 2013 and 2020.
The first 16 sectors of the file system are empty and reserved for other uses. The rest begins with a volume descriptor set (a header block which describes the subsequent layout) and then the path tables, directories and files on the disc. An ISO 9660 compliant disc must contain at least one primary volume descriptor describing the file system and a volume descriptor set terminator which is a volume descriptor that marks the end of the descriptor set. The primary volume descriptor provides information about the volume, characteristics and metadata, including a root directory record that indicates in which sector the root directory is located. Other fields contain metadata such as the volume's name and creator, along with the size and number of logical blocks used by the file system. Path tables summarize the directory structure of the relevant directory hierarchy. For each directory in the image, the path table provides the directory identifier, the location of the extent in which the directory is recorded, the length of any extended attributes associated with the directory, and the index of its parent directory path table entry.
There are several extensions to ISO 9660 that relax some of its limitations. Notable examples include Rock Ridge (Unix-style permissions and longer names), Joliet (Unicode, allowing non-Latin scripts to be used), El Torito (enables CDs to be bootable) and the Apple ISO 9660 Extensions (macOS-specific file characteristics such as resource forks, file backup date and more).
History
Compact discs were originally developed for recording musical data, but soon came to be used for storing other digital data types because they were equally effective for archival mass data storage. The lowest-level format for this type of compact disc, called the CD-ROM, was defined in the Yellow Book specification in 1983. However, that book did not define any format for organizing data on CD-ROMs into logical units such as files, which led to every CD-ROM maker creating its own format. In order to develop a CD-ROM file system standard (Z39.60 - Volume and File Structure of CDROM for Information Interchange), the National Information Standards Organization (NISO) set up Standards Committee SC EE (Compact Disc Data Format) in July 1985. In September/October 1985 several companies invited experts to participate in the development of a working paper for such a standard.
In November 1985, representatives of computer hardware manufacturers gathered at the High Sierra Hotel and Casino (currently called the Hard Rock Hotel and Casino) near Lake Tahoe, California. This group became known as the High Sierra Group (HSG). Present at the meeting were representatives from Apple Computer, AT&T, Digital Equipment Corporation (DEC), Hitachi, LaserData, Microware, Microsoft, 3M, Philips, Reference Technology Inc., Sony Corporation, TMS Inc., VideoTools (later Meridian), Xebec, and Yelick. The meeting report evolved from the Yellow Book CD-ROM standard, which was so open ended it was leading to diversification and creation of many incompatible data storage methods. The High Sierra Group Proposal (HSGP) was released in May 1986, defining a file system for CD-ROMs commonly known as the High Sierra Format.
A draft version of this proposal was submitted to the European Computer Manufacturers Association (ECMA) for standardization. With some changes, this led to the issue of the initial edition of the ECMA-119 standard in December 1986. ECMA submitted its standard to the International Organization for Standardization (ISO) for fast tracking, where it was further refined into the ISO 9660 standard. For compatibility, the second edition of ECMA-119 was revised to be equivalent to ISO 9660 in December 1987. ISO 9660:1988 was published in 1988. The main changes from the High Sierra Format in the ECMA-119 and ISO 9660 standards were international extensions to allow the format to work better in non-US markets.
In order not to create incompatibilities, NISO suspended further work on Z39.60, which had been adopted by NISO members on 28 May 1987. It was withdrawn before final approval, in favour of ISO 9660.
In 2013, ISO published Amendment 1 to the ISO 9660 standard, introducing new data structures and relaxed file name rules intended to "bring harmonization between ISO 9660 and widely used 'Joliet Specification'." In December 2017, a 3rd Edition of ECMA-119 was published that is technically identical with ISO 9660, Amendment 1.
In 2020, ISO published Amendment 2, which adds some minor clarifying matter, but does not add or correct any technical information of the standard.
Specifications
The following is the rough overall structure of the ISO 9660 file system.
Multi-byte values can be stored in three different formats: little-endian, big-endian, and in a concatenation of both types in what the specification calls "both-byte" order. Both-byte order is required in several fields in the volume descriptors and directory records, while path tables can be either little-endian or big-endian.
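For illustration, a both-byte 32-bit field can be produced in Python like this (a sketch, not taken from any ISO 9660 library):

    import struct

    def both_byte_order_32(value: int) -> bytes:
        # A "both-byte" field stores the same 32-bit value twice:
        # little-endian first, then big-endian, for 8 bytes in total.
        return struct.pack('<I', value) + struct.pack('>I', value)

A reader on either a little-endian or a big-endian machine can then simply pick whichever half matches its native byte order.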
Top level
The system area, the first 32,768 data bytes of the disc (16 sectors of 2,048 bytes each), is unused by ISO 9660 and therefore available for other uses. While it is suggested that they are reserved for use by bootable media, a CD-ROM may contain an alternative file system descriptor in this area, and it is often used by hybrid CDs to offer classic Mac OS-specific and macOS-specific content.
Volume descriptor set
The data area begins with the volume descriptor set, a set of one or more volume descriptors terminated with a volume descriptor set terminator. These collectively act as a header for the data area, describing its content (similar to the BIOS parameter block used by FAT, HPFS and NTFS formatted disks).
Each volume descriptor is 2048 bytes in size, fitting perfectly into a single Mode 1 or Mode 2 Form 1 sector. Each descriptor has the same basic structure: a one-byte type code, the five-byte standard identifier "CD001", a one-byte version number, and 2,041 bytes of type-dependent data.
The data field of a volume descriptor may be subdivided into several fields, with the exact content depending on the type. Redundant copies of each volume descriptor can also be included in case the first copy of the descriptor becomes corrupt.
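A rough Python sketch of walking the volume descriptor set, using the layout just described (the reading logic and names here are illustrative, not taken from a specific library):

    VD_TYPES = {0: "Boot Record", 1: "Primary", 2: "Supplementary/Enhanced",
                3: "Volume Partition", 255: "Set Terminator"}

    def read_volume_descriptors(path, sector_size=2048):
        descriptors = []
        with open(path, 'rb') as f:
            f.seek(16 * sector_size)                    # descriptors start at sector 16
            while True:
                sector = f.read(sector_size)
                if len(sector) < 7 or sector[1:6] != b"CD001":
                    break                               # not a volume descriptor
                vd_type, version = sector[0], sector[6]
                descriptors.append((VD_TYPES.get(vd_type, "Reserved"), version, sector[7:]))
                if vd_type == 255:                      # set terminator ends the list
                    break
        return descriptors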
Standard volume descriptor types are the following:
An ISO 9660 compliant disc must contain at least one primary volume descriptor describing the file system and a volume descriptor set terminator for indicating the end of the descriptor sequence. The volume descriptor set terminator is simply a particular type of volume descriptor with the purpose of marking the end of this set of structures. The primary volume descriptor provides information about the volume, characteristics and metadata, including a root directory record that indicates in which sector the root directory is located. Other fields contain the description or name of the volume, and information about who created it and with which application. The size of the logical blocks which the file system uses to segment the volume is also stored in a field inside the primary volume descriptor, as well as the amount of space occupied by the volume (measured in number of logical blocks).
In addition to the primary volume descriptor(s), supplementary volume descriptors or enhanced volume descriptors may be present. Supplementary volume descriptors describe the same volume as the primary volume descriptor does, and are normally used for providing additional code page support when the standard code tables are insufficient. The standard specifies that ISO 2022 is used for managing code sets that are wider than 8 bytes, and that ISO 2375 escape sequences are used to identify each particular code page used. Consequently, ISO 9660 supports international single-byte and multi-byte character sets, provided they fit into the framework of the referenced standards. However, ISO 9660 does not specify any code pages that are guaranteed to be supported: all use of code tables other than those defined in the standard itself are subject to agreement between the originator and the recipient of the volume. Enhanced volume descriptors were introduced in ISO 9660, Amendment 1. They relax some of the requirements of the other volume descriptors and the directory records referenced by them: for example, the directory depth can exceed eight, file identifiers need not contain '.' or file version number, the length of a file and directory identifier is maximized to 207.
Path tables
Path tables summarize the directory structure of the relevant directory hierarchy. For each directory in the image, the path table provides the directory identifier, the location of the extent in which the directory is recorded, the length of any extended attributes associated with the directory, and the index of its parent directory path table entry. The parent directory number is a 16-bit number, limiting its range from 1 to 65,535.
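A hedged sketch of parsing the little-endian ("Type L") path table, based on the record layout in ECMA-119 (one-byte identifier length, one-byte extended attribute record length, four-byte extent location, two-byte parent directory number, then the identifier, padded to an even length):

    import struct

    def parse_l_path_table(data: bytes):
        records, offset = [], 0
        while offset < len(data) and data[offset] != 0:
            name_len = data[offset]                         # length of directory identifier
            ext_attr_len = data[offset + 1]                 # extended attribute record length
            extent, parent = struct.unpack_from('<IH', data, offset + 2)
            name = data[offset + 8: offset + 8 + name_len]
            records.append((name.decode('ascii', 'replace'), extent, parent, ext_attr_len))
            offset += 8 + name_len + (name_len & 1)         # pad byte if identifier length is odd
        return records

The big-endian ("Type M") table has the same layout with the multi-byte fields stored in big-endian order.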
Directories and files
Directory entries are stored following the location of the root directory entry, where evaluation of filenames is begun. Both directories and files are stored as extents, which are sequential series of sectors. Files and directories are differentiated only by a file attribute that indicates its nature (similar to Unix). The attributes of a file are stored in the directory entry that describes the file, and optionally in the extended attribute record. To locate a file, the directory names in the file's path can be checked sequentially, going to the location of each directory to obtain the location of the subsequent subdirectory. However, a file can also be located through the path table provided by the file system. This path table stores information about each directory, its parent, and its location on disc. Since the path table is stored in a contiguous region, it can be searched much faster than jumping to the particular locations of each directory in the file's path, thus reducing seek time.
The standard specifies three nested levels of interchange (paraphrased from section 10):
Level 1: File names are limited to eight characters with a three-character extension. Directory names are limited to eight characters. Files may contain one single file section.
Level 2: Files may contain one single file section.
Level 3: No additional restrictions than those stipulated in the main body of the standard. That is, directory identifiers may not exceed 31 characters in length, and file name + '.' + file name extension may not exceed 30 characters in length (sections 7.5 and 7.6). Files are also allowed to consist of multiple non-contiguous sections (with some restrictions as to order).
Additional restrictions in the body of the standard: The depth of the directory hierarchy must not exceed 8 (root directory being at level 1), and the path length of any file must not exceed 255. (section 6.8.2.1).
The standard also specifies the following name restrictions (sections 7.5 and 7.6):
All levels restrict file names in the mandatory file hierarchy to upper case letters, digits, underscores ("_"), and a dot. (See also section 7.4.4 and Annex A.)
If no characters are specified for the File Name then the File Name Extension shall consist of at least one character.
If no characters are specified for the File Name Extension then the File Name shall consist of at least one character.
File names shall not have more than one dot.
Directory names shall not use dots at all.
A CD-ROM producer may choose one of the lower Levels of Interchange specified in chapter 10 of the standard, and further restrict file name length from 30 characters to only 8+3 in file identifiers, and 8 in directory identifiers in order to promote interchangeability with implementations that do not implement the full standard.
All numbers in ISO 9660 file systems, except the single-byte value used for the GMT offset, are unsigned numbers. As the length of a file's extent on disc is stored in a 32-bit value, it allows for a maximum length of just over 4.2 GB (more precisely, one byte less than 4 GiB). It is possible to circumvent this limitation by using the multi-extent (fragmentation) feature of ISO 9660 Level 3 to create ISO 9660 file systems and single files up to 8 TB. With this, files larger than 4 GiB can be split up into multiple extents (sequential series of sectors), each not exceeding the 4 GiB limit. For example, free software such as InfraRecorder, ImgBurn and mkisofs, as well as Roxio Toast, are able to create ISO 9660 file systems that use multi-extent files to store files larger than 4 GiB on appropriate media such as recordable DVDs. Linux supports multiple extents.
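A small Python sketch of how an authoring tool might plan such a split; the sector-aligned per-extent limit is an assumption about typical implementations, not something mandated by the standard:

    MAX_EXTENT_BYTES = 2**32 - 1          # extent length is an unsigned 32-bit byte count

    def plan_extents(file_size: int, sector_size: int = 2048):
        # Round the per-extent limit down to a whole number of sectors so that
        # only the final extent is partially filled.
        limit = MAX_EXTENT_BYTES - (MAX_EXTENT_BYTES % sector_size)
        sizes = []
        while file_size > limit:
            sizes.append(limit)
            file_size -= limit
        sizes.append(file_size)
        return sizes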
Extensions and improvements
There are several extensions to ISO 9660 that relax some of its limitations. Notable examples include Rock Ridge (Unix-style permissions and longer names), Joliet (Unicode, allowing non-Latin scripts to be used), El Torito (enables CDs to be bootable) and the Apple ISO 9660 Extensions (macOS-specific file characteristics such as resource forks, file backup date and more).
SUSP
System Use Sharing Protocol (SUSP, IEEE P1281) provides a generic way of including additional properties for any directory entry reachable from the primary volume descriptor (PVD). In an ISO 9660 volume, every directory entry has an optional system use area whose contents are undefined and left to be interpreted by the system. SUSP defines a method to subdivide that area into multiple system use fields, each identified by a two-character signature tag. The idea behind SUSP was that it would enable any number of independent extensions to ISO 9660 to be created and included on a volume without conflicting. It also allows for the inclusion of property data that would otherwise be too large to fit within the limits of the system use area.
SUSP defines several common tags and system use fields:
CE: Continuation area
PD: Padding field
SP: System use sharing protocol indicator
ST: System use sharing protocol terminator
ER: Extensions reference
ES: Extension selector
Other known SUSP fields include:
AA: Apple extension, preferred
BA: Apple extension, old (length attribute is missing)
AS: Amiga file properties
ZF: zisofs compressed file, usually produced by program mkzftree or by libisofs. Transparently decompressed by Linux kernel if built with CONFIG_ZISOFS.
AL: records Extended File Attributes, including ACLs. Proposed by libburnia, supported by libisofs.
The Apple extensions do not technically follow the SUSP standard; however the basic structure of the AA and AB fields defined by Apple are forward compatible with SUSP; so that, with care, a volume can use both Apple extensions as well as RRIP extensions.
Rock Ridge
The Rock Ridge Interchange Protocol (RRIP, IEEE P1282) is an extension which adds POSIX file system semantics. The availability of these extension properties allows for better integration with Unix and Unix-like operating systems. The standard takes its name from the fictional town Rock Ridge in Mel Brooks' film Blazing Saddles. The RRIP extensions are, briefly:
Longer file names (up to 255 bytes) and fewer restrictions on allowed characters (support for lowercase, etc.)
UNIX-style file modes, user ids and group ids, and file timestamps
Support for Symbolic links and device files
Deeper directory hierarchy (more than 8 levels)
Efficient storage of sparse files
The RRIP extensions are built upon SUSP, defining additional tags for support of POSIX semantics, along with the format and meaning of the corresponding system use fields:
RR: Rock Ridge extensions in-use indicator (note: dropped from standard after version 1.09)
PX: POSIX file attributes
PN: POSIX device numbers
SL: symbolic link
NM: alternate name
CL: child link
PL: parent link
RE: relocated directory
TF: time stamp
SF: sparse file data
Amiga Rock Ridge is similar to RRIP, except it provides additional properties used by AmigaOS. It too is built on the SUSP standard by defining an "AS"-tagged system use field. Thus both Amiga Rock Ridge and the POSIX RRIP may be used simultaneously on the same volume. Some of the specific properties supported by this extension are the additional Amiga-bits for files. There is support for attribute "P" that stands for "pure" bit (indicating re-entrant command) and attribute "S" for script bit (indicating batch file). This includes the protection flags plus an optional comment field. These extensions were introduced by Angela Schmidt with the help of Andrew Young, the primary author of the Rock Ridge Interchange Protocol and System Use Sharing Protocol. The first publicly available software to master a CD-ROM with Amiga extensions was MakeCD, an Amiga software which Angela Schmidt developed together with Patrick Ohly.
El Torito
El Torito is an extension designed to allow booting a computer from a CD-ROM. It was announced in November 1994 and first issued in January 1995 as a joint proposal by IBM and BIOS manufacturer Phoenix Technologies. According to legend, the El Torito CD/DVD extension to ISO 9660 got its name because its design originated in an El Torito restaurant in Irvine, California (). The initial two authors were Curtis Stevens, of Phoenix Technologies, and Stan Merkin, of IBM.
A 32-bit PC BIOS will search for boot code on an ISO 9660 CD-ROM. The standard allows for booting in two different modes. Either in hard disk emulation when the boot information can be accessed directly from the CD media, or in floppy emulation mode where the boot information is stored in an image file of a floppy disk, which is loaded from the CD and then behaves as a virtual floppy disk. This is useful for computers that were designed to boot only from a floppy drive. For modern computers the "no emulation" mode is generally the more reliable method. The BIOS will assign a BIOS drive number to the CD drive. The drive number (for INT 13H) assigned is any of 80hex (hard disk emulation), 00hex (floppy disk emulation) or an arbitrary number if the BIOS should not provide emulation. Emulation is useful for booting older operating systems from a CD, by making it appear to them as if they were booted from a hard or floppy disk.
El Torito can also be used to produce CDs which can boot up Linux operating systems, by including the GRUB bootloader on the CD and following the Multiboot Specification. While the El Torito spec alludes to a "Mac" platform ID, PowerPC-based Apple Macintosh computers don't use it.
Joliet
Joliet is an extension specified and endorsed by Microsoft and has been supported by all versions of its Windows operating system since Windows 95 and Windows NT 4.0. Its primary focus is the relaxation of the filename restrictions inherent with full ISO 9660 compliance. Joliet accomplishes this by supplying an additional set of filenames that are encoded in UCS-2BE (UTF-16BE in practice since Windows 2000). These filenames are stored in a special supplementary volume descriptor, that is safely ignored by ISO 9660-compliant software, thus preserving backward compatibility. The specification only allows filenames to be up to 64 Unicode characters in length. However, the documentation for mkisofs states filenames up to 103 characters in length do not appear to cause problems. Microsoft has documented it "can use up to 110 characters."
Joliet allows Unicode characters to be used for all text fields, which includes file names and the volume name. A "Secondary" volume descriptor with type 2 contains the same information as the Primary one (sector 16 offset 40 bytes), but in UCS-2BE in sector 17, offset 40 bytes. As a result of this, the volume name is limited to 16 characters.
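A minimal illustration of the Joliet encoding in Python; the 64-character limit check reflects the strict specification, while, as noted above, many tools accept longer names:

    def joliet_identifier(name: str, max_chars: int = 64) -> bytes:
        # Joliet stores identifiers in UCS-2, big-endian (UTF-16BE in practice).
        if len(name) > max_chars:
            raise ValueError("identifier too long for strict Joliet")
        return name.encode('utf-16-be')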
Many current PC operating systems are able to read Joliet-formatted media, thus allowing exchange of files between those operating systems even if non-Roman characters are involved (such as Arabic, Japanese or Cyrillic), which was formerly not possible with plain ISO 9660-formatted media. Operating systems which can read Joliet media include:
Microsoft Windows; Microsoft recommends the use of the Joliet extension for developers targeting Windows.
Linux
macOS
FreeBSD
OpenSolaris
Haiku
AmigaOS
RISC OS
Romeo
Romeo was developed by Adaptec and allows the use of long filenames up to 128 characters. However, Romeo is not backwards compatible with ISO 9660 and discs authored using this file system can only be read under the Windows 9x and Windows NT platforms, thus not allowing exchange of files between those operating systems if non-Roman characters are involved (such as Arabic, Japanese or Cyrillic), for example ü becomes ³.
Apple extensions
Apple Computer authored a set of extensions that add ProDOS or HFS/HFS+ (the primary contemporary file system for Mac OS) properties to the filesystem. Some of the additional metadata properties include:
Date of last backup
File type
Creator code
Flags and data for display
Reference to a resource fork
In order to allow non-Macintosh systems to access Macintosh files on CD-ROMs, Apple chose to use an extension of the standard ISO 9660 format. Most of the data, other than the Apple specific metadata, remains visible to operating systems that are able to read ISO 9660.
Other extensions
For operating systems which do not support any extensions, a name translation file TRANS.TBL must be used. The TRANS.TBL file is a plain ASCII text file. Each line contains three fields, separated by an arbitrary amount of whitespace:
The file type ("F" for file or "D" for directory);
The ISO 9660 filename (including the usually hidden ";1" for files); and
The extended filename, which may contain spaces.
Most implementations that create TRANS.TBL files put a single space between the file type and ISO 9660 name and some arbitrary number of tabs between the ISO 9660 filename and the extended filename.
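Because the format is just whitespace-separated text, a reader can be sketched in a few lines of Python (illustrative only):

    def parse_trans_tbl(text: str):
        # Each line: type ("F" or "D"), ISO 9660 name (often ending in ";1"),
        # then the extended name, which may itself contain spaces.
        entries = []
        for line in text.splitlines():
            parts = line.split(None, 2)
            if len(parts) < 3:
                continue
            entry_type, iso_name, long_name = parts
            entries.append((entry_type, iso_name, long_name.rstrip()))
        return entries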
Native support for using TRANS.TBL still exists in many ISO 9660 implementations, particularly those related to Unix. However, it has long since been superseded by other extensions, and modern utilities that create ISO 9660 images either cannot create TRANS.TBL files at all, or no longer create them unless explicitly requested by the user. Since a TRANS.TBL file has no special identification other than its name, it can also be created separately and included in the directory before filesystem creation.
The ISO 13490 standard is an extension to the ISO 9660 format that adds support for multiple sessions on a disc. Since ISO 9660 is by design a read-only, pre-mastered file system, all the data has to be written in one go or "session" to the medium. Once written, there is no provision for altering the stored content. ISO 13490 was created to allow adding more files to a writeable disc such as CD-R in multiple sessions.
JIS X 0606:1998, also known as ISO 9660:1999, is a Japanese Industrial Standard draft created by the Japanese National Body (JTC1 N4222) in order to make some improvements and remove some limitations from the original ISO 9660 standard. This draft was submitted in 1998, but it has not been ratified as an ISO standard yet. Some of its changes include the removal of some restrictions imposed by the original standard by extending the maximum file name length to 207 characters, removing the eight-level maximum directory nesting limit, and removing the special meaning of the dot character in filenames. Some operating systems allow these relaxations as well when reading optical discs. Several disc authoring tools (such as Nero Burning ROM, mkisofs and ImgBurn) support a so-called "ISO 9660:1999" mode (sometimes called "ISO 9660 v2" or "ISO 9660 Level 4" mode) that removes restrictions following the guidelines in the ISO 9660:1999 draft.
The ISO 13346/ECMA-167 standard was designed in conjunction to the ISO 13490 standard. This new format addresses most of the shortcomings of ISO 9660, and a subset of it evolved into the Universal Disk Format (UDF), which was adopted for DVDs. The volume descriptor table retains the ISO9660 layout, but the identifier has been updated.
Disc images
Optical disc images are a common way to electronically transfer the contents of CD-ROMs. They often have the filename extension .iso (.iso9660 is less common, but also in use) and are commonly referred to as "ISOs".
Platforms
Most operating systems support reading of ISO 9660 formatted discs, and most new versions support the extensions such as Rock Ridge and Joliet. Operating systems that do not support the extensions usually show the basic (non-extended) features of a plain ISO 9660 disc.
Operating systems that support ISO 9660 and its extensions include the following:
DOS: access with extensions, such as MSCDEX.EXE (Microsoft CDROM Extension), NWCDEX.EXE or CORELCDX.EXE
Microsoft Windows 95, Windows 98, Windows ME: can read ISO 9660 Level 1, 2, 3, and Joliet
Microsoft Windows NT 4.0, Windows 2000, Windows XP, and newer Windows versions can read ISO 9660 Level 1, 2, 3, Joliet, and ISO 9660:1999. Windows 7 may also mistake the UDF format for CDFS; for more information see UDF.
Linux and BSD: ISO 9660 Level 1, 2, 3, Joliet, Rock Ridge, and ISO 9660:1999
Apple GS/OS: ISO Level 1 and 2 support via the HS.FST File System Translator.
Classic Mac OS 7 to 9: ISO Level 1, 2. Optional free software supports Rock Ridge and Joliet (including ISO Level 3): Joke Ridge and Joliet Volume Access.
macOS (all versions): ISO Level 1, 2, Joliet and Rock Ridge Extensions. Level 3 is not currently supported, although users have been able to mount these discs
AmigaOS supports the "AS" extensions (which preserve the Amiga protection bits and file comments)
QNX
ULTRIX
OS/2, eComStation and ArcaOS
BeOS, Zeta and Haiku
OpenVMS supports only ISO 9660 Interchange levels 1–3, with no extensions
RISC OS support for optical media written on a PC is patchy. Most CD-Rs/RWs work perfectly; however, DVD±Rs/RWs/RAMs are entirely hit and miss running RISC OS 4.02, RISC OS 4.39 and RISC OS 6.20.
See also
Comparison of disc image software
Disk image emulator
List of International Organization for Standardization standards
Hybrid CD
References
Further reading
External links
This is the ECMA release of the ISO 9660:1988 standard, available as a free download
ISOLINUX source code (see isolinux.asm line 294 onward)
(see int 13h in interrupt.b, esp. functions 4a to 4d)
, discusses shortcomings of the standard
US Patent 5758352 - Common name space for long and short filenames
Amiga APIs
Apple Inc. file systems
Compact disc
Disk file systems
Ecma standards
09660
Optical computer storage
Optical disc authoring
Windows disk file systems |
302027 | https://en.wikipedia.org/wiki/Dive%20computer | Dive computer | A dive computer, personal decompression computer or decompression meter is a device used by an underwater diver to measure the elapsed time and depth during a dive and use this data to calculate and display an ascent profile which according to the programmed decompression algorithm, will give a low risk of decompression sickness.
Most dive computers use real-time ambient pressure input to a decompression algorithm to indicate the remaining time to the no-stop limit, and after that has passed, the minimum decompression required to surface with an acceptable risk of decompression sickness. Several algorithms have been used, and various personal conservatism factors may be available. Some dive computers allow for gas switching during the dive. Audible alarms may be available to warn the diver when exceeding the no-stop limit, the maximum operating depth for the gas mixture, the recommended ascent rate or other limit beyond which risk increases significantly.
The display provides data to allow the diver to avoid decompression, or to decompress relatively safely, and includes depth and duration of the dive. Several additional functions and displays may be available for interest and convenience, such as water temperature and compass direction, and it may be possible to download the data from the dives to a personal computer via cable or wireless connection. Data recorded by a dive computer may be of great value to the investigators in a diving accident, and may allow the cause of an accident to be discovered.
Dive computers may be wrist-mounted or fitted to a console with the submersible pressure gauge. A dive computer is perceived by recreational scuba divers and service providers to be one of the most important items of safety equipment. Use by professional scuba divers is also common, but use by surface-supplied divers is less widespread, as the diver's depth is monitored at the surface by pneumofathometer and decompression is controlled by the diving supervisor.
Purpose
The primary purpose of a decompression computer is to facilitate safe decompression by an underwater diver breathing a suitable gas at ambient pressure, by providing information based on the recent pressure exposure history of the diver that allows an ascent with acceptably low risk of developing decompression sickness. Dive computers address the same problem as decompression tables, but are able to perform a continuous calculation of the partial pressure of inert gases in the body based on the actual depth and time profile of the diver. As the dive computer automatically measures depth and time, it is able to warn of excessive ascent rates and missed decompression stops and the diver has less reason to carry a separate dive watch and depth gauge. Many dive computers also provide additional information to the diver including air and water temperature, data used to help prevent oxygen toxicity, a computer-readable dive log, and the pressure of the remaining breathing gas in the diving cylinder. This recorded information can be used for the diver's personal log of their activities or as important information in medical review or legal cases following diving accidents.
Because of the computer's ability to continually re-calculate based on changing data, the diver benefits by being able to remain underwater for longer periods at acceptable risk. For example, a recreational diver who plans to stay within "no-decompression" limits can in many cases simply ascend a few feet each minute, while continuing the dive, and still remain within reasonably safe limits, rather than adhering to a pre-planned bottom time and ascending directly. So-called multi-level dives can be pre-planned with traditional dive tables or personal computer and smartphone apps, or on the fly using waterproof dive tables, but the additional calculations become complex and the plan may be cumbersome to follow, and the risk of errors rises with profile complexity. Computers allow for a certain amount of spontaneity during the dive, and automatically take into account deviations from the dive plan.
Dive computers are used to safely calculate decompression schedules in recreational, scientific, and military diving operations. There is no reason to assume that they cannot be valuable tools for commercial diving operations, especially on multi-level dives.
Components
Function
Dive computers are battery-powered computers within a watertight and pressure resistant case. These computers track the dive profile by measuring time and pressure. All dive computers measure the ambient pressure to model the concentration of gases in the tissues of the diver. More advanced dive computers provide additional measured data and user input into the calculations, for example, the water temperature, gas composition, altitude of the water surface, or the remaining pressure in the diving cylinder.
The computer uses the pressure and time input in a decompression algorithm to estimate the partial pressure of inert gases that have been dissolved in the diver's tissues. Based on these calculations, the computer estimates when a direct ascent is no longer possible, and what decompression stops would be needed based on the profile of the dive up to that time and recent hyperbaric exposures which may have left residual dissolved gases in the diver.
Many dive computers are able to produce a low risk decompression schedule for dives that take place at altitude, which requires longer decompression than for the same profile at sea level, because the computers measure the atmospheric pressure before the dive and take this into account in the algorithm. When divers travel before or after diving and particularly when they fly, they should transport their dive computer with them in the same pressure regime so that the computer can measure the pressure profile that their body has undergone.
Many computers have some way for the user to adjust decompression conservatism. This may be by way of a personal factor, which makes an undisclosed change to the algorithm decided by the manufacturer, or the setting of gradient factors, a way of reducing the permitted supersaturation of tissue compartments by specific ratios, which is well defined in the literature, leaving the responsibility for making informed decisions on personal safety to the diver.
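As a rough illustration of how gradient factors reduce permitted supersaturation, the sketch below scales a Bühlmann-style M-value line by a single gradient factor and returns the resulting ceiling for one tissue compartment. The coefficient pair is the commonly published ZH-L16 value for the 5-minute nitrogen compartment and should be treated as an assumption; real computers interpolate between a low and a high gradient factor over the course of the ascent.

```python
# Illustrative only: how a gradient factor scales a Bühlmann-style M-value
# to give a more conservative ceiling. The (a, b) pair below is the value
# commonly quoted for the 5-minute nitrogen compartment; treat it as an
# assumption for this sketch.

def ceiling_bar(p_tissue_bar, a, b, gradient_factor):
    """Deepest tolerated ambient pressure (bar) for one compartment.
    gradient_factor = 1.0 reproduces the raw M-value line; smaller
    values permit less supersaturation and give a deeper ceiling."""
    return (p_tissue_bar - gradient_factor * a) / (gradient_factor / b - gradient_factor + 1.0)

a_n2, b_n2 = 1.1696, 0.5578           # assumed coefficients, 5 min N2 compartment
p_tissue = 2.5                         # example nitrogen tension in bar
for gf in (1.0, 0.85, 0.70):
    print(gf, round(ceiling_bar(p_tissue, a_n2, b_n2, gf), 2))
```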
Algorithms
The decompression algorithms used in dive computers vary between manufacturers and computer models. Examples of decompression algorithms are the Bühlmann algorithms and their variants, the Thalmann VVAL18 Exponential/Linear model, the Varying Permeability Model, and the Reduced Gradient Bubble Model. The proprietary names for the algorithms do not always clearly describe the actual decompression model. The algorithm may be a variation of one of the standard algorithms; for example, several versions of the Bühlmann decompression algorithm are in use. The algorithm used may be an important consideration in the choice of a dive computer. Dive computers using the same internal electronics may be marketed under a variety of brand names.
The algorithm used is intended to inform the diver of a decompression profile that will keep the risk of decompression sickness (DCS) to an acceptable level. Researchers use experimental diving programmes or data that has been recorded from previous dives to validate an algorithm. The dive computer measures depth and time, then uses the algorithm to determine decompression requirements or estimate remaining no-stop times at the current depth. An algorithm takes into account the magnitude of pressure reduction, breathing gas changes, repetitive exposures, rate of ascent, and time at altitude. Algorithms are not able to reliably account for age, previous injury, ambient temperature, body type, alcohol consumption, dehydration, and other factors such as patent foramen ovale, because the effects of these factors have not been experimentally quantified, though some may attempt to compensate for these by factoring in user input, and for diver peripheral temperature and workload by having sensors that monitor ambient temperature and cylinder pressure changes as a proxy. Water temperature is known to be a poor proxy for body temperature, as it does not account for the effectiveness of the diving suit or heat generated by work or active heating systems.
At one time, the newest dive computers on the market used:
Liquivision X1: V-Planner Live (VPM-B Varying Permeability Model) and GAP for X1 (Bühlmann with gradient factors)
Mares: Mares-Wienke Reduced Gradient Bubble Model
Pelagic Pressure Systems: modified Haldanean/DSAT Database or Bühlmann ZHL-16C (called Z+)
Seiko: Bühlmann ZHL-12 as modified by Randy Bohrer.
Suunto: Suunto-Wienke Reduced Gradient Bubble Model. The Suunto folded RGBM is not a true RGBM algorithm, which would be computationally intensive, but a Haldanean model with additional bubble limitation factors.
Uwatec: Bühlmann ZHL-8 ADT (Adaptive), MB (Micro Bubble), PMG (Predictive Multigas), Bühlmann ZHL-16 DD (Trimix)
Heinrichs Weikamp OSTC and DR5: Bühlmann ZHL-16 and Bühlmann ZHL-16 plus Erik Baker's gradient factors deep stop algorithm both for open circuit and fixed set point closed circuit rebreather.
Later models used:
Cochran EMC-20H: 20-tissue Haldanean model.
Cochran VVAL-18: nine-tissue Haldanean model with exponential ongassing and linear offgassing.
Delta P: 16-tissue Haldanean model with VGM (variable gradient model, i.e., the tolerated supersaturation levels change during the dive as a function of the profile, but no details are provided as to how this is done).
Mares: ten-tissue Haldanean model with RGBM; the RGBM part of the model adjusts gradient limits in multiple-dive scenarios through undisclosed "reduction factors".
Suunto: nine-tissue Haldanean model with RGBM; the RGBM part of the model adjusts gradient limits in multiple-dive scenarios through undisclosed "reduction factors".
Uwatec: ZHL-8 ADT (Adaptive), MB (Micro Bubble), PMG (Predictive Multigas), ZHL-16 DD (Trimix).
More recent models used:
Aqualung: Pelagic Z+ – a proprietary algorithm based on Bühlmann ZHL-16C algorithm.
Cressi: Haldane and Wienke RGBM algorithm.
Garmin: Bühlmann ZHL-16C algorithm.
Oceanic: Dual Algorithm - Pelagic Z+ (ZHL-16C) and Pelagic DSAT.
ScubaPro: ZHL-8 ADT (Adaptive), MB (Micro Bubble), PMG (Predictive Multigas), ZHL-16 DD (Trimix).
Shearwater: Bühlmann ZHL-16C with user selectable gradient factors or optional VPM-B and VPM-B/GFS.
Among the most recent models:
Aqualung: Pelagic Z+ – a proprietary algorithm developed by Dr. John E. Lewis, based on Bühlmann ZHL-16C algorithm. Conservatism may be adjusted by altitude setting, deep stops, and safety stops.
Atomic: "Recreational RGBM" based on the Wienke model, using user input of age, selected risk level, and exertion level to adjust conservatism.
Cressi: RGBM. User settings for conservatism and optional deep and safety stops.
Garmin: Bühlmann ZHL-16C, with a choice of three preset conservatism settings or customisable gradient factors, and customisable safety stops.
Mares: RGBM or Bühlmann ZHL-16C GF (Gradient Factor) depending on model. Preset and customisable conservatism settings.
Oceanic: User option of dual algorithms - Pelagic Z+ (ZHL-16C) and Pelagic DSAT.
Ratio: Buhlmann ZHL-16B and VPM-B, user settable Gradient Factors (GFL/GFH) for Buhlmann and user settable Bubble Radius for VPM.
ScubaPro: ZHL-16 ADT MB PMG. Predictive multi-gas modified algorithm, with various conservatism options with user inputs of experience level, age and physical condition, which are assumed to have some influence on gas elimination rate. Input from breathing rate, skin temperature and heart rate monitor is also available and can be used by the algorithm to estimate a workload condition, which is used to modify the algorithm.
Shearwater: Bühlmann ZHL-16C with optional VPM-B, VPM-B/GFS and DCIEM. The standard package is Buhlmann with user selectable gradient factors, and the option to enable VPM software which may be used in open-circuit tech and rebreather modes, or enable DCIEM which may be used in air and single-gas nitrox modes. VPM-B/GFS is a combination of the two models which applies the ceiling from the more conservative model for each stop. The current decompression ceiling may be displayed as an option and the algorithm will calculate decompression at any depth below the ceiling. The GFS option is a hybrid that automatically chooses the decompression ceiling from the more conservative of the VPM-B profile and a Buhlmann ZHL-16C profile. For the Buhlmann profile a single gradient factor is used, adjustable over a range of 70% (most conservative) to 99% (least conservative), the default is 90%. The DCIEM model differs from ZHL-16C and VPM which are parallel models and assume that all compartments are exposed to ambient partial pressures and no gas interchange occurs between compartments. A serial model assumes that the diffusion takes place through a series of compartments, and only one is exposed to the ambient partial pressures.
Suunto: RGBM based algorithm with conservatism settings, known to be a comparatively conservative algorithm. There are various versions used in different models. The technical computers use an algorithm that claims flexibility through the use of continuous decompression, which means the current ceiling is displayed instead of a stop depth.
RGBM
Technical RGBM
Fused RGBM: for deep diving, switches between "RGBM" and "Technical RGBM" for open circuit and rebreather dives to a maximum of 150 m
Fused RGBM 2
Bühlmann 16 GF (Gradient Factor) based on ZHL-16C
Display information
Dive computers provide a variety of visual dive information to the diver.
Most dive computers display the following information during the dive on an LCD or OLED display:
Current depth (derived from ambient pressure).
Maximum depth reached on the current dive.
No stop time, the time remaining at the current depth without the need for decompression stops on ascent.
Elapsed dive time of the current dive.
Many dive computers also display additional information:
Total ascent time, or time to surface (TTS) assuming immediate ascent at recommended rate, and decompression stops as indicated. When multiple gases are enabled in the computer, the time to surface may be predicted based on the optimum gas being selected, during ascent, but the actual time to surface will depend on the actual gas selected, and may be longer than the displayed value. This does not invalidate the decompression calculation, which accounts for the actual exposure and gas selected.
Required decompression stop depth and time, also assuming immediate ascent at recommended rate.
Ambient temperature (actually the temperature of the pressure transducer).
Current ascent rate. This may be displayed as an actual speed of ascent, or a relative rate compared to the recommended rate.
Dive profile (often not displayed during the dive, but transmitted to a personal computer).
Gas mixture in use, as selected by the user.
Oxygen partial pressure at current depth, based on selected gas mixture.
Cumulative oxygen toxicity exposure (CNS), computed from measured pressure and time and selected gas mixture.
Battery charge status or low battery warning.
Some computers are designed to display information from a diving cylinder pressure sensor, such as:
Gas pressure.
Estimated remaining air time (RAT) based on available gas, rate of gas consumption and ascent time.
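A minimal sketch of how a remaining air time figure of this kind can be derived from cylinder pressure, an assumed surface air consumption rate, and the current depth is shown below. The reserve pressure and the consumption model are illustrative assumptions; manufacturers' actual calculations also account for the predicted ascent and any required stops.

```python
# Illustrative only: one simple way a remaining-air-time estimate can be
# formed from cylinder pressure and a measured surface consumption rate.
# The reserve pressure and the consumption model are assumptions.

def remaining_air_time_min(cyl_pressure_bar, reserve_bar, cyl_volume_l,
                           sac_l_per_min, depth_m):
    """Minutes of gas above the reserve at the current depth and breathing rate."""
    usable_litres = max(cyl_pressure_bar - reserve_bar, 0.0) * cyl_volume_l
    ambient_bar = 1.0 + depth_m / 10.0          # simple seawater approximation
    consumption_at_depth = sac_l_per_min * ambient_bar
    return usable_litres / consumption_at_depth

# Example: 12 L cylinder at 180 bar, 50 bar reserve, SAC 20 L/min, at 20 m
print(round(remaining_air_time_min(180, 50, 12.0, 20.0, 20.0)))   # about 26 min
```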
Some computers can provide a real time display of the oxygen partial pressure in the rebreather. This requires an input from an oxygen cell. These computers will also calculate cumulative oxygen toxicity exposure based on measured partial pressure.
Some computers can display a graph of the current tissue saturation for several tissue compartments, according to the algorithm in use.
Some information, which has no practical use during a dive, is only shown at the surface to avoid an information overload of the diver during the dive:
"Time to Fly" display showing when the diver can safely board an airplane.
Desaturation time
A log of key information about previous dives – date, start time, maximum depth, duration, and possibly others.
Maximum non-decompression bottom times for subsequent dives based on the estimated residual concentration of the inert gases in the tissues.
Dive planning functions (no decompression time based on current tissue loads and user-selected depth and breathing gas).
Warnings and alarms may include:
Maximum operating depth exceeded
No decompression limit approaching
No decompression limit exceeded
Excessive ascent rate
Decompression ceiling violation
Omitted decompression
Low cylinder pressure (where applicable)
Oxygen partial pressure high or low
Maximum depth violation
Audible information
Many dive computers have warning buzzers that warn the diver of events such as:
Excessive ascent rates.
Missed decompression stops.
Maximum operation depth exceeded.
Oxygen toxicity limits exceeded.
Decompression ceiling violation, or stop depth violation
Some buzzers can be turned off to avoid the noise.
Data sampling, storage and upload
Data sampling rates generally range from once per second to once per 30 seconds, though sampling rates as low as once per 180 seconds have been used. This rate may be user selectable. Depth resolution of the display generally ranges between 1 m and 0.1 m. The depth recorded for each sampling interval may be the maximum depth, the depth at the sampling time, or the average depth over the interval. For a short interval these choices make no significant difference to the calculated decompression status of the diver. The recorded values are those at the point where the computer is carried, usually the wrist or a console suspended from the harness, which may vary in depth differently from the demand valve that determines breathing gas pressure, the pressure actually relevant for decompression computation.
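A small sketch of the three recording options mentioned above, assuming raw samples are available within each logging interval:

```python
# Illustrative only: three ways a logger could summarise depth over one
# recording interval. The 1 Hz raw rate and 10 s interval are assumptions.

def summarise_interval(depth_samples_m):
    """Return (last, maximum, average) depth for one recording interval."""
    return (depth_samples_m[-1],
            max(depth_samples_m),
            sum(depth_samples_m) / len(depth_samples_m))

# Example: ten 1 Hz samples inside a 10 s recording interval
print(summarise_interval([18.2, 18.4, 18.9, 19.1, 19.0, 18.8, 18.7, 18.6, 18.5, 18.4]))
```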
Temperature resolution for data records varies from 0.1 °C to 1 °C. Accuracy is generally not specified, and there is often a lag of minutes as the sensor temperature changes to follow the water temperature. Temperature is measured at the pressure sensor, and is needed primarily to provide correct pressure data, so it is not a high priority for decompression monitoring to give the precise ambient temperature in real time.
Data storage is limited by internal memory, and the amount of data generated depends on the sampling rate. Capacity may be specified in hours of run time, number of dives recorded, or both. Values of up to 100 hours were available by 2010. This may be influenced by sampling rate selected by the diver.
By 2010, most dive computers had the ability to upload the data to a PC or smartphone, by cable, infrared or Bluetooth wireless connection.
Special purpose dive computers
Some dive computers are able to calculate decompression schedules for breathing gases other than air, such as nitrox, pure oxygen, trimix or heliox. The more basic nitrox dive computers only support one or two gas mixes for each dive. Others support many different mixes. When multiple gases are supported, there may be an option to set those which will be carried on the dive as active, which sets the computer to calculate the decompression schedule and time to surface based on the assumption that the active gases will be used when they are optimal for decompression. Calculation of tissue gas loads will generally follow the gas actually selected by the diver, unless there is multiple cylinder pressure monitoring to enable automatic gas selection by the computer.
Most dive computers calculate decompression for open circuit scuba where the proportions of the breathing gases are constant for each mix: these are "constant fraction" dive computers. Other dive computers are designed to model the gases in closed circuit scuba (diving rebreathers), which maintain constant partial pressures of gases by varying the proportions of gases in the mixture: these are "constant partial pressure" dive computers. These may be switched over to constant fraction mode if the diver bails out to open circuit. There are also dive computers which monitor oxygen partial pressure in real time in combination with a user nominated diluent mixture to provide a real-time updated mix analysis which is then used in the decompression algorithm to provide decompression information.
Additional functionality and features
Some dive computers provide additional functionality, generally a subset of those listed below:
Breathing gas oxygen analyser
Electronic compass
Gas blending calculator
Global navigation satellite receiver (only works at the surface)
Light-meter
Lunar phase indicator (useful for estimating tidal conditions)
Magnetometer (for detecting ferrous metal)
Pitch and roll angle
Stopwatch
Time of day in a second time zone
Time to surface after another 5 minutes at current depth on current gas.
Gauge mode (overrides decompression monitoring, and just records and displays depth and time and leaves the diver to control decompression by following tables). Selecting gauge mode may reset the tissue saturation records to default, which invalidates any further decompression calculations until the diver has fully desaturated.
Air integration (AI) – Some dive computers are designed to measure, display, and monitor pressure in one or more diving cylinders. The computer is either connected to the first stage by a high-pressure hose, or uses a pressure transducer on the regulator first stage to provide a wireless data signal indicating remaining cylinder pressure. The signals are encoded to eliminate the risk of one diver's computer picking up a signal from another diver's transducer, or interference from other sources. Some dive computers can receive a signal from more than one remote pressure transducer; the Ratio iX3M Tech and others can process and display pressures from up to 10 transmitters.
Workload modification of decompression algorithm based on gas consumption rate from integrated gas pressure monitor.
Heart rate monitor from remote transducer. This can also be used to modify the decompression algorithm to allow for an assumed workload.
Graphic display of calculated tissue compartment inert gas tensions during and after the dive.
Indication of computed decompression ceiling in addition to the more usual next stop depth. The effects on decompression risk of following the ceiling rather than remaining below the stop depth is not known, but stop depths are arbitrarily chosen for the calculation of decompression tables, and time spent at any depth below the indicated ceiling depth is processed by the same algorithm.
Display of supersaturation of limiting tissue as a percentage of M-value in the event of an immediate ascent. This is an indicator of decompression risk in the event of an emergency ascent.
Display of current supersaturation of limiting tissue as a percentage of M-value during ascent. This is an indication of decompression stress and risk in real time.
Multiple active gases for open circuit and closed circuit diluent.
Deactivation of gas options during dive in case of lost gas. This will trigger the computer to recalculate the estimated time to surface without the deactivated gases.
Definition of a new gas during the dive to allow calculations for decompression on gas supplied by another diver.
Battery charge status.
Alternative decompression algorithms.
Features and accessories:
Piezo-electric buttons (no moving parts)
User input by directional tapping
Rechargeable batteries.
Wireless charging.
Optional battery types. For example, the Shearwater Perdix and Petrel 2 can use 1.5 V alkaline cells or 3.6 V lithium cells provided they have the same physical format (AA).
User changeable batteries.
Battery redundancy.
User selected display colours (useful for the colour-blind), and variable brightness.
Screen inversion for ambidextrous use of units with plug-in cable connections for oxygen monitors.
Mask or mouthpiece mounted head-up display. (NERD)
Wireless downloading of dive log data.
Firmware upgrades over the Internet via Bluetooth or USB cable from smart phone or personal computer.
Display prompts for changing settings.
Twin straps or bungee straps for improved security.
Strap extensions for wristwatch format computers to allow for fitting over the forearm on bulky diving suits.
Aftermarket straps, for improved security.
Screen protectors, in the form of a self-adhesive transparent plastic film or a rigid transparent plastic cover.
Software for downloading, display and analysis of logged data. Most downloadable dive computers have a proprietary application, and many can also interface with open source software such as Subsurface. Some can download via a smartphone to the cloud.
Safety and reliability
The ease of use of dive computers can allow divers to perform complex dives with little planning. Divers may rely on the computer instead of dive planning and monitoring. Dive computers are intended to reduce risk of decompression sickness, and allow easier monitoring of the dive profile. Where present, breathing gas integration allows easier monitoring of remaining gas supply, and warnings can alert the diver to some high risk situations, but the diver remains responsible for planning and safe execution of the dive plan. The computer cannot guarantee safety, and only monitors a fraction of the situation. The diver must remain aware of the rest by personal observation and attention to the ongoing situation. A dive computer can also fail during a dive, due to malfunction or misuse.
Failure modes and probability of failure
It is possible for a dive computer to malfunction during a dive. Manufacturers are not obliged to publish reliability statistics, and generally only include a warning in the user manual that they are used at the diver's own risk. Reliability has markedly improved over time, particularly for the hardware.
Hardware failures
Mechanical and electrical failures:
Leaks allowing ingress of water to the electronic components, may be caused by:
Cracked faceplate, which is more likely with hard, scratch-resistant glass and sapphire used on watch format units. They are strong, but brittle, and can shatter under impact with a sufficiently hard point contact.
Seal failures can occur at joints, probably most often at the battery closure, as it is usually the most often disturbed. Computers with user serviceable batteries often use a double O-ring barrel seal to provide a more reliable seal.
Button failures are one of the more frequent problems, some models are particularly susceptible. Occasionally the failure is in the form of leaks, but more often the switch fails open, which is sometimes a fatigue problem. Pressure sensitive switches with no moving parts are sometimes used to avoid this problem.
Circuitry failures, other than switch failures, often due to water or battery leaks causing internal corrosion.
Battery failure, such as running down unexpectedly, leaking, or failing to charge properly. Internal rechargeable batteries exchange a lower risk of water leaks for a higher risk of battery degradation over time.
Non-rechargeable lithium batteries can explode if incorrectly used in a dive computer with charging facilities.
Software failures
Inherent risk
The main problem in establishing decompression algorithms, for both dive computers and the production of decompression tables, is that gas absorption and release under pressure in the human body are still not completely understood. Furthermore, the risk of decompression sickness also depends on the physiology, fitness, condition and health of the individual diver. The safety record of most dive computers indicates that when used according to the manufacturer's instructions, and within the recommended depth range, the risk of decompression sickness is low.
Personal settings to adjust the conservatism of the algorithm are available for most dive computers. They may be input as undisclosed personal factors, as reductions to M-values by a fixed ratio, by gradient factor, or by selecting a bubble size limit in VPM and RGBM models. The personal settings for recreational computers tend to be additional to the conservatism factors programmed into the algorithm by the manufacturer. Technical diving computers tend to allow a wider range of choice at the user's discretion, and provide warnings that the diver should ensure that they understand what they are doing and the associated risk before adjusting from the moderately conservative factory settings.
Human error
Many dive computers have menus, various selectable options and various display modes, which are controlled by a small number of buttons. Control of the computer display differs between manufacturers and in some cases between models by the same manufacturer. The diver may need information not displayed on the default screen during a dive, and the button sequence to access the information may not be immediately obvious. If the diver becomes familiar with the control of the computer on dives where the information is not critical before relying on it for more challenging dives, there is less risk of confusion that may lead to an accident.
Most dive computers are supplied with default factory settings for algorithm conservatism, and maximum oxygen partial pressure, which are acceptably safe in the opinion of the manufacturer's legal advisors. Some of these may be changed to user preferences, which will affect risk. The user manual will generally provide instructions for adjusting and resetting to factory default, with some information on how to choose appropriate user settings. Responsibility for appropriate use of user settings lies with the user who makes or authorises the settings. There is a risk of the user making inappropriate choices due to lack of understanding or input error.
Management and mitigation strategies
If the diver has been monitoring decompression status and is within the no-decompression limits, a computer failure can be acceptably managed by simply surfacing at the recommended ascent rate, and if possible, doing a short safety stop near the surface. If, however the computer could fail while the diver has a decompression obligation, or cannot make a direct ascent, some form of backup is prudent. The dive computer can be considered safety-critical equipment when there is a significant decompression obligation, as failure without some form of backup system can expose the diver to a risk of severe injury or death.
The diver may carry a backup dive computer. The probability of both failing at the same time is orders of magnitude lower. Use of a backup which is the same model as the primary simplifies use and reduces the probability of user error, particularly under stress, but makes the equipment redundancy less statistically independent. Statistics for failure rates of dive computers do not appear to be publicly available.
If diving to a well regulated buddy system where both divers follow closely matched dive profiles, the buddy's dive computer may be sufficient backup.
A dive profile can be planned before the dive, and followed closely to allow reversion to the planned schedule if the computer fails. This implies the availability of a backup timer and depth gauge, or the schedule will be useless. It also requires the diver to follow the planned profile conservatively.
Some organisations such as the American Academy of Underwater Sciences have recommended that a dive plan should be established before the dive and then followed throughout the dive unless the dive is aborted. This dive plan should be within the limits of the decompression tables to increase the margin of safety, and to provide a backup decompression schedule based on the dive tables in case the computer fails underwater. The disadvantage of this extremely conservative use of dive computers is that when used this way, the dive computer is merely used as a bottom timer, and the advantages of real time computation of decompression status – the original purpose of dive computers – are sacrificed. This recommendation is not in the 2018 version of the AAUS Standards for Scientific diving: Manual.
A diver wishing to further reduce the risk of decompression sickness can take additional precautionary measures, such as one or more of:
Use a dive computer with a relatively conservative decompression model
Induce additional conservatism in the algorithm by selecting a more conservative personal setting or using a higher altitude setting than the actual dive altitude indicates.
Add additional deep safety stops during a deep dive
Make a slow ascent
Add additional shallow safety stops, or stay longer at the stops than required by the computer
Have a long surface interval between dives
If using a backup computer, run one on a low conservatism setting as an indication of fastest acceptable risk ascent for an emergency, and the other at the diver's preferred conservatism for personally acceptable risk when there is no contingency and no rush to surface. The diver can always elect to do more decompression than indicated as necessary by the computer for a lower risk of decompression sickness without incurring a penalty for later dives. Some dive computers can be set to a different gradient factor during a dive, which has the same effect if the diver can remember under stress how to make the adjustment, and some computers can be set to display the maximum tissue supersaturation value for an immediate ascent.
Continue to breathe oxygen enriched gas after surfacing, either in the water while waiting for the boat, after exiting the water, or both.
Management of violations
Violations of the safety limits as indicated by the computer display may occur during a dive for various reasons, including user error and circumstances beyond the diver's control. How this is handled depends on the decompression model, how the algorithm implements the model, and how the manufacturer chooses to interpret and apply the violation criteria.
Many computers go into a "lockout mode" for 24 to 48 hours if the diver violates the computer's safety limits, to discourage continued diving after an unsafe dive. Once in lockout mode, these computers will not function until the lockout period has ended. This is a reasonable response if lockout is initiated after the dive, as the algorithm will have been used out of scope and the manufacturer will reasonably prefer to avoid further responsibility for its use until tissues can be considered desaturated. When lockout happens underwater it will leave the diver without any decompression information at the time when it is most needed. For example, the Apeks Quantum will stop displaying the depth if the 100 m depth limit is exceeded, but will lock out 5 minutes after surfacing for a missed decompression stop. The Scubapro/Uwatec Galileo technical trimix computer will switch to gauge mode at 155 m after a warning, after which the diver will get no decompression information. Other computers, for example Delta P's VR3, Cochran NAVY, and the Shearwater range will continue to function, providing 'best guess' functionality while warning the diver that a stop has been missed, or a ceiling violated.
Some dive computers are extremely sensitive to violations of indicated decompression stop depth. The HS Explorer is programmed to credit time spent even slightly (0.1 metre) above the indicated stop depth at only 1/60 of the nominal rate. There is no theoretical or experimental basis claimed as justification for this hard limit. Others, such as the Shearwater Perdix, will fully credit any decompression done below the calculated decompression ceiling, which may be displayed as a user-selectable option, and is always equal to or shallower than the indicated stop depth. This strategy is supported by the mathematics of the model, but little experimental evidence is available on the practical consequences, so a warning is provided. A violation of the computed decompression ceiling elicits an alarm, which self-cancels if the diver immediately descends below the ceiling. The Ratio iX3M will provide a warning if the indicated stop depth is violated by 0.1 m or more, but it is not clear how the algorithm is affected. In many cases the user manual does not provide information on how sensitive the algorithm is to precise depth, what penalties may be incurred by minor discrepancies, or what theoretical basis justifies the penalty. Over-reaction to stop depth violation puts the diver at an unnecessary disadvantage if there is an urgent need to surface.
More complex functionality is accompanied by more complex code, which is more likely to include undiscovered errors, particularly in non-critical functions, where testing may not be so rigorous. The trend is to be able to download firmware updates online to eliminate bugs as they are found and corrected. In earlier computers, some errors required factory recall.
Redundancy
A single computer shared between divers cannot accurately record the dive profile of the second diver, and therefore their decompression status will be unreliable and probably inaccurate. In the event of computer malfunction during a dive, the buddy's computer record may be the best available estimate of decompression status, and has been used as a guide for decompression in emergencies. Further diving after an ascent in these conditions exposes the diver to an unknown additional risk. Some divers carry a backup computer to allow for this possibility. The backup computer will carry the full recent pressure exposure history, and continued diving after a malfunction of one computer will not affect risk. It is also possible to set the conservatism on the backup computer to allow for the fastest acceptable ascent in case of an emergency, with the primary computer set for the diver's preferred risk level if this feature is not available on the computer. Under normal circumstances the primary computer will be used to control ascent rate.
History
In 1951 the Office of Naval Research funded a project with the Scripps Institution of Oceanography for the theoretical design of a prototype decompression computer. Two years later, two Scripps researchers, Groves and Monk, published a paper specifying the required functionalities for a decompression device to be carried by the diver. It must calculate decompression during a multilevel dive, it must take into account residual nitrogen loading from previous dives, and, based on this information, specify a safe ascent profile with better resolution than decompression tables. They suggested using an electrical analog computer to measure decompression and air consumption.
Pneumatic analogues
The prototype mechanical analogue, the Foxboro Decomputer Mark I, was produced by the Foxboro Company in 1955, and evaluated by the US Navy Experimental Diving Unit in 1957. The Mark 1 simulated two tissues using five calibrated porous ceramic flow resistors and five bellows actuators to drive a needle which indicated decompression risk during an ascent by moving towards a red zone on the display dial. The US Navy found the device to be too inconsistent.
The first recreational mechanical analogue dive computer, the "decompression meter", was designed by the Italians De Sanctis & Alinari in 1959 and built by their company named SOS, which also made depth gauges. The decompression meter was distributed directly by SOS and also by scuba diving equipment firms such as Scubapro and Cressi. It was very simple in principle: a waterproof bladder filled with gas inside the casing bled into a smaller chamber through a semi-porous ceramic flow resistor to simulate a single tissue in-gassing and out-gassing. The chamber pressure was measured by a bourdon tube gauge, calibrated to indicate decompression status. The device functioned so poorly that it was eventually nicknamed "bendomatic".
In 1965, R. A. Stubbs and D. J. Kidd applied their decompression model to a pneumatic analogue decompression computer, and in 1967 Brian Hills reported development of a pneumatic analogue decompression computer modelling the thermodynamic decompression model. It modelled phase equilibration instead of the more commonly used limited supersaturation criteria and was intended as an instrument for on-site control of decompression of a diver based on real-time output from the device. Hills considered the model to be conservative.
Several mechanical analogue decompression meters were subsequently made, some with several bladders for simulating the effect on various body tissues, but they were sidelined with the arrival of electronic computers.
The Canadian DCIEM pneumatic analogue computer of 1962 simulated four tissues, approximating the DCIEM tables of the time.
The 1973 GE Decometer by General Electric used semi-permeable silicone membranes instead of ceramic flow resistors, which allowed deeper dives.
The Farallon Decomputer of 1975 by Farallon Industries, California simulated two tissues, but produced results very different from the US Navy tables of the time, and was withdrawn a year later.
Electrical analogues
At the same time as the mechanical simulators, electrical analog simulators were being developed, in which tissues were simulated by a network of resistors and capacitors, but these were found to be unstable with temperature fluctuations, and required calibration before use. They were also bulky and heavy because of the size of the batteries needed. The first analogue electronic decompression meter was the Tracor, completed in 1963 by Texas Research Associates.
Digital
The first digital dive computer was a laboratory model, the XDC-1, based on a desktop electronic calculator, converted to run a DCIEM four-tissue algorithm by Kidd and Stubbs in 1975. It used pneumofathometer depth input from surface-supplied divers.
From 1976 the diving equipment company Dacor developed and marketed a digital dive computer which used a table lookup based on stored US Navy tables rather than a real-time tissue gas saturation model. The Dacor Dive Computer (DDC), displayed output on light-emitting diodes for: current depth; elapsed dive time; surface interval; maximum depth of the dive; repetitive dive data; ascent rate, with a warning for exceeding 20 metres per minute; warning when no-decompression limit is reached; battery low warning light; and required decompression.
The Canadian company CTF Systems Inc. then developed the XDC-2 or CyberDiver II (1980), which also used table lookup, and the XDC-3, also known as CyberDiverIII, which used microprocessors, measured cylinder pressure using a high-pressure hose, calculated tissue loadings using the Kidd-Stubbs model, and remaining no-stop time. It had an LED matrix display, but was limited by the power supply, as the four 9 V batteries only lasted for 4 hours and it weighed 1.2 kg. About 700 of the XDC models were sold from 1979 to 1982.
In 1979 the XDC-4 could already be used with mixed gases and different decompression models using a multiprocessor system, but was too expensive to make an impact on the market.
In 1983, the Hans Hass-DecoBrain, designed by Divetronic AG, a Swiss start-up, became the first decompression diving computer, capable of displaying the information that today's diving computers do. The DecoBrain was based on Albert A. Bühlmann's 16 compartment (ZHL-12) tissue model which Jürg Hermann, an electronic engineer, implemented in 1981 on one of Intel's first single-chip microcontrollers as part of his thesis at the Swiss Federal Institute of Technology.
The 1984 Orca EDGE was an early example of a dive computer. Designed by Craig Barshinger, Karl Huggins and Paul Heinmiller, the EDGE did not display a decompression plan, but instead showed the ceiling or the so-called "safe-ascent-depth". A drawback was that if the diver was faced by a ceiling, he did not know how long he would have to decompress. The EDGE's large, unique display, however, featuring 12 tissue bars permitted an experienced user to make a reasonable estimate of his or her decompression obligation.
In the 1980s the technology quickly improved. In 1983 the Orca Edge became available as the first commercially viable dive computer. The model was based on the US Navy dive tables but did not calculate a decompression plan. However, production capacity was only one unit a day.
In 1984 the US Navy diving computer (UDC) was developed, based on a 9-tissue model by Edward D. Thalmann of the Naval Experimental Diving Unit (NEDU), Panama City, who developed the US Navy tables. Divetronic AG completed the UDC development – as it had been started by the chief engineer Kirk Jennings of the Naval Ocean System Center, Hawaii, and Thalmann of the NEDU – by adapting the Deco Brain for US Navy warfare use and for their 9-tissue MK-15 mixed gas model under an R&D contract of the US Navy.
Orca Industries continued to refine their technology with the release of the Skinny-dipper in 1987 to do calculations for repetitive diving. They later released the Delphi computer in 1989 that included calculations for diving at altitude as well as profile recording.
In 1986 the Finnish company, Suunto, released the SME-ML. This computer had a simple design, with all the information on display. It was easy to use and was able to store 10 hours of dives, which could be accessed any time. The SME-ML used a 9 compartment algorithm used for the US Navy tables, with tissues half times from 2.5 to 480 minutes. Battery life was up to 1500 hours, maximum depth 60 m.
In 1987 Swiss company UWATEC entered the market with the Aladin, which was a bulky and fairly rugged grey device with quite a small screen, a maximum depth of 100 metres, and an ascent rate of 10 metres per minute. It stored data for 5 dives and had a user replaceable 3.6 V battery, which lasted for around 800 dives. For some time it was the most commonly seen dive computer, particularly in Europe. Later versions had a battery which had to be changed by the manufacturer and an inaccurate battery charge indicator, but the brand remained popular.
The c1989 Dacor Microbrain Pro Plus claimed to have the first integrated dive planning function, the first EEPROM storing full dive data for the last three dives, basic data for 9999 dives, and recorded maximum depth achieved, cumulative total dive time, and total number of dives. The LCD display provides a graphic indication of remaining no-decompression time.
General acceptance
Even by 1989, the advent of dive computers had not met with what might be considered widespread acceptance. In addition to a general mistrust, at the time, of taking underwater a piece of electronics upon which one's life might depend, objections ranged from dive resorts' concerns that the increased bottom time would upset their boat and meal schedules, to experienced divers' fears that the increased bottom time would, regardless of the claims, result in many more cases of decompression sickness. Recognising the need for clear communication and debate, Michael Lang of the California State University at San Diego and Bill Hamilton of Hamilton Research Ltd. brought together, under the auspices of the American Academy of Underwater Sciences, a diverse group that included most of the dive computer designers and manufacturers, some of the best known hyperbaric medicine theorists and practitioners, and representatives from the recreational diving agencies, the cave diving community and the scientific diving community.
The basic issue was made clear by Andrew A. Pilmanis in his introductory remarks: "It is apparent that dive computers are here to stay, but are still in the early stages of development. From this perspective, this workshop can begin the process of establishing standard evaluation procedures for assuring safe and effective utilization of dive computers in scientific diving."
After meeting for two days the conferees were still in "the early stages of development," and the "process of establishing standard evaluation procedures for assuring safe and effective utilization of dive computers in scientific diving" had not really begun. University of Rhode Island diving safety officer Phillip Sharkey and Paul Heinmiller, ORCA EDGE's Director of Research and Development, prepared a 12-point proposal that they invited the diving safety officers in attendance to discuss at an evening closed meeting. Those attending included Jim Stewart (Scripps Institution of Oceanography), Lee Somers (University of Michigan), Mark Flahan (San Diego State University), Woody Southerland (Duke University), John Heine (Moss Landing Marine Laboratories), Glen Egstrom (University of California, Los Angeles), John Duffy (California Department of Fish and Game), and James Corry (United States Secret Service). Over the course of several hours the suggestion prepared by Sharkey and Heinmiller was edited and turned into the following 13 recommendations:
Only those makes and models of dive computers specifically approved by the Diving Control Board may be used.
Any diver desiring the approval to use a dive computer as a means of determining decompression status must apply to the Diving Control Board, complete an appropriate practical training session and pass a written examination.
Each diver relying on a dive computer to plan dives and indicate or determine decompression status must have his own unit.
On any given dive, both divers in the buddy pair must follow the most conservative dive computer.
If the dive computer fails at any time during the dive, the dive must be terminated and appropriate surfacing procedures should be initiated immediately.
A diver should not dive for 18 hours before activating a dive computer to use it to control his diving.
Once the dive computer is in use, it must not be switched off until it indicates complete outgassing has occurred or 18 hours have elapsed, whichever comes first.
When using a dive computer, non-emergency ascents are to be at the rate specified for the make and model of dive computer being used.
Ascent rates shall not exceed 40 fsw/min in the last 60 fsw.
Whenever practical, divers using a dive computer should make a stop between 10 and 30 feet for 5 minutes, especially for dives below 60 fsw.
Only 1 dive on the dive computer in which the NDL of the tables or dive computer has been exceeded may be made in any 18-hour period.
Repetitive and multi-level diving procedures should start the dive, or series of dives, at the maximum planned depth, followed by subsequent dives of shallower exposures.
Multiple deep dives require special consideration.
As recorded in "Session 9: General discussion and concluding remarks:"
Mike Lang next led the group discussion to reach consensus on the guidelines for use of dive computers. These 13 points had been thoroughly discussed and compiled the night before, so that most of the additional comments were for clarification and precision. The following items are the guidelines for use of dive computers for the scientific diving community. It was again reinforced that almost all of these guidelines were also applicable to the diving community at large.
After the AAUS workshop most opposition to dive computers dissipated, numerous new models were introduced, the technology dramatically improved and dive computers soon became standard scuba diving equipment.
Further development
c1996, Mares marketed a dive computer with spoken audio output, produced by Benemec Oy of Finland.
c2000, HydroSpace Engineering developed the HS Explorer, a Trimix computer with optional PO2 monitoring and twin decompression algorithms: Bühlmann, and the first full RGBM implementation.
In 2001, the US Navy approved the use of Cochran NAVY decompression computer with the VVAL 18 Thalmann algorithm for Special Warfare operations.
In 2008, the Underwater Digital Interface (UDI) was released to the market. This dive computer, based on the RGBM model, includes a digital compass, an underwater communication system that enables divers to transmit preset text messages, and a distress signal with homing capabilities.
By 2010 the use of dive computers for decompression status tracking was virtually ubiquitous among recreational divers and widespread in scientific diving. 50 models by 14 manufacturers were available in the UK.
The variety and number of additional functions available has increased over the years.
Validation
Verification is the determination that a dive computer functions correctly, in that it correctly executes its programmed algorithm, and this would be a standard quality assurance procedure by the manufacturer, while validation confirms that the algorithm provides the accepted level of risk. The risk of the decompression algorithms programmed into dive computers may be assessed in several ways, including tests on human subjects, monitored pilot programs, comparison to dive profiles with known decompression sickness risk, and comparison to risk models.
Performance of dive computers exposed to profiles with known human subject results.
Studies (2004) at the University of Southern California's Catalina hyperbaric chamber ran dive computers against a group of dive profiles that have been tested with human subjects, or have a large number of operational dives on record.
The dive computers were immersed in water inside the chamber and the profiles were run. Remaining no-decompression times, or required total decompression times, were recorded from each computer 1 min prior to departure from each depth in the profile. The results for a 40 msw "low risk" multi-level no-decompression dive from the PADI/DSAT RDP test series provided a range of 26 min of no-decompression time remaining to 15 min of required decompression time for the computers tested. The computers which indicated required decompression may be regarded as conservative: following the decompression profile of a conservative algorithm or setting will expose the diver to a reduced risk of decompression sickness, but the magnitude of the reduction is unknown. Conversely, the more aggressive indications of the computers showing a considerable amount of remaining no-decompression time will expose the diver to a greater risk, of unknown magnitude, than the fairly conservative PADI/DSAT schedule.
Comparative assessment and validation
Evaluation of decompression algorithms could be done without the need for tests on human subjects by establishing a set of previously tested dive profiles with a known risk of decompression sickness. This could provide a rudimentary baseline for dive computer comparisons. As of 2012, the accuracy of temperature and depth measurements from computers may lack consistency between models, making this type of research difficult.
Accuracy of displayed data
European standard "EN13319:2000 Diving accessories - Depth gauges and combined depth and time measuring devices - Functional and safety requirements, test methods", specifies functional and safety requirements and accuracy standards for depth and time measurement in dive computers and other instruments measuring water depth by ambient pressure. It does not apply to any other data which may be displayed or used by the instrument.
Temperature data are used to correct pressure sensor output, which is non-linear with temperature, and are not as important as pressure for the decompression algorithm, so a lesser level of accuracy is required. A study published in 2021 examined the response time, accuracy and precision of water temperature measurement by dive computers and found that 9 of 12 models were accurate within 0.5 °C given sufficient time for the temperature to stabilise, using downloaded data from open water and wet chamber dives in fresh- and seawater. High ambient air temperature is known to affect temperature profiles for several minutes into a dive, depending on the location of the pressure sensor, as the heat transfer from the computer body to the water is slowed by factors such as the poor thermal conductivity of a plastic housing, internal heat generation, and mounting of the sensor orifice in contact with the insulation of the diving suit. An edge-mounted sensor in a small metal housing will follow ambient temperature changes much faster than a base-mounted sensor in a large, thick-walled plastic housing, while both provide accurate pressure signals.
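The lag described above behaves approximately like a first-order system, which the following sketch illustrates; the three-minute time constant is an arbitrary assumption, not a measured value for any model.

```python
import math

# Illustrative only: a first-order lag model of the kind that explains the
# minutes-long delay between water temperature and the temperature reported
# by a sensor inside the computer housing. The time constant is an assumption.

def sensor_temperature(t_water_c, t_sensor_c, time_constant_min, dt_min):
    """One time step of exponential relaxation toward the water temperature."""
    return t_water_c + (t_sensor_c - t_water_c) * math.exp(-dt_min / time_constant_min)

# Example: computer at 28 °C air temperature entering 12 °C water,
# assumed 3-minute time constant, sampled once a minute for 10 minutes
t = 28.0
for minute in range(1, 11):
    t = sensor_temperature(12.0, t, time_constant_min=3.0, dt_min=1.0)
    print(minute, round(t, 1))
```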
An earlier survey of 49 models of decompression computer published in 2012 showed a wide range of error in displayed depth and temperature. Temperature measurement is primarily used to ensure correct processing of the depth transducer signal, so measuring the temperature of the pressure transducer is appropriate, and the slow response to external ambient temperature is not relevant to this function, provided that the pressure signal is correctly processed.
Nearly all of the tested computers recorded depths greater than the actual pressure would indicate, and some were markedly inaccurate (by up to 5%). There was considerable variability in permitted no-stop bottom times, but for square profile exposures, the computer-generated values tended to be more conservative than tables at depths shallower than 30 m, but less conservative at 30–50 m. The no-stop limits generated by the computers were compared to the no-stop limits of the DCIEM and RNPL tables. Variation from applied depth pressure measured in a decompression chamber, where accuracy of pressure measurement instrumentation is periodically calibrated to fairly high precision (±0.25%), showed errors from −0.5 to +2 m, with a tendency to increase with depth.
There appeared to be a tendency for models of computer by the same manufacturer to display a similar variance in displayed pressure, which the researchers interpreted as suggesting that the offset could be a deliberate design criterion, but could also be an artifact of using similar components and software by the manufacturer. The importance of these errors for decompression purposes is unknown, as ambient pressure, which is measured directly, but not displayed, is used for decompression calculations. Depth is calculated as a function of pressure, and does not take into account density variations in the water column. Actual linear distance below the surface is more relevant for scientific measurement, while displayed depth is more relevant to forensic examinations of dive computers, and for divers using the computer in gauge mode with standard decompression tables, which are usually set up for pressure in feet or metres of water column.
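The conversion from measured pressure to displayed depth amounts to dividing the gauge pressure by an assumed water density and gravitational acceleration, as in the sketch below; the fixed seawater density is an assumption, and is one reason displayed depth can differ from true linear depth.

```python
# Illustrative only: the basic conversion from measured ambient pressure to
# displayed depth. The fixed density value is an assumption; computers
# typically use a nominal density for fresh or sea water rather than the
# actual density of the water column.

G = 9.81                     # m/s^2
DENSITY_SEAWATER = 1025.0    # kg/m^3, assumed nominal value

def depth_m(ambient_pressure_bar, surface_pressure_bar=1.013):
    """Depth of water equivalent to the gauge pressure, in metres."""
    gauge_pa = (ambient_pressure_bar - surface_pressure_bar) * 100_000.0
    return gauge_pa / (DENSITY_SEAWATER * G)

print(round(depth_m(3.04), 1))   # roughly 20 m for about 2 bar of gauge pressure
```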
Ergonomic considerations
If the diver cannot effectively use the dive computer during a dive it is of no value except as a dive profile recorder.
To use the device effectively, the ergonomic aspects of the display and control input system (user interface) are important. Misunderstanding of the displayed data and inability to make necessary inputs can lead to life-threatening problems underwater. The operating manual is not available for reference during the dive, so either the diver must learn and practice the use of the specific unit before using it in complex situations, or the operation must be sufficiently intuitive that it can be worked out on the spot by a diver who may be under stress at the time.
Although several manufacturers claim that their units are simple and intuitive to operate, the number of functions, the layout of the display, and the sequence of button presses differ markedly between manufacturers, and even between models by the same manufacturer. The number of buttons that may need to be pressed during a dive generally varies between two and four, and the layout and sequence of pressing them can become complicated. Experience with one model may be of little use in preparing the diver to use a different model, and a significant relearning stage may be necessary. Both technical and ergonomic aspects of the dive computer are important for diver safety.
Underwater legibility of the display may vary significantly with underwater conditions and the visual acuity of the individual diver. If labels identifying output data and menu choices are not legible at the time they are needed, they do not help. Legibility is strongly influenced by text size, font, brightness, and contrast. Colour can help in recognition of meaning, such as distinguishing between normal and abnormal conditions, but may detract from legibility, particularly for the colour-blind, and a blinking display demands attention to a warning or alarm, but distracts from other information.
Several criteria have been identified as important ergonomic considerations:
Ease of reading critical data, including:
No decompression time remaining
Current depth
Elapsed time since the beginning of the dive (run time)
If decompression is required, total time to surface, and depth of the next required decompression stop
If gas integration is the only way to monitor the remaining gas supply, the remaining gas pressure.
Ease of reading and accessibility of the primary screen display. Misinterpretation of the display data can be very dangerous. This can occur for various reasons, including lack of identifying information and poor legibility. Ease of returning to the primary screen from alternative display options is also important. If the diver cannot remember how to get back to the screen which displays safety-critical information, their safety may be severely compromised. Divers may not fully understand and remember the operating instructions, as they tend to be complicated. Under stress complicated procedures are more likely to be forgotten or misapplied. Alternative screens may revert to the primary screen automatically after a time sufficient to read the auxiliary information. Critical information may be displayed on all stable screen options during a dive as a compromise. It is preferable for the data to be visible by default, and not require illumination by a dive light or internal lighting that needs a button pressed to light up. Some manufacturers offer similar functionality in optional compact and larger screen formats.
Ease of use and understanding of the user manual.
Ease of reading and clarity of meaning of warnings. These may be provided by simple symbol displays, by audible alarms, flashing displays, colour coding or combinations of these. Alarms should clearly indicate the problem, so the diver need not waste time trying to work out what is at fault, and can take immediate action to correct the problem.
For more technical applications, ease of making gas switches to both pre-set gas mixes carried by the diver, and non-preset mixes, which might be supplied by another diver.
Ease of accessing alternative screen data, much of which is not directly important for safety, but may affect the success of the dive in other ways, like use of compass features.
Legibility of the display under various ambient conditions of visibility and lighting, and for varying visual acuity of the diver, which may include fogging of the mask or even loss of the mask.
Manufacturing and performance standards
Standards relevant in the European Union:
When a dive computer is integrated with a cylinder pressure gauge it has to be certified according to EN250 (respiratory equipment) and the PPE Directive becomes mandatory.
The EMC directive (89/336/EEC) for electrical appliances, requires that they do not cause electrical interference, and are not susceptible to it.
EN13319:2000: covers equipment for measuring depth and time, but explicitly excludes monitoring of decompression obligation.
PPE Directive 89/686/EEC is intended to harmonize products to provide a high level of protection and safety, but dive computers are not listed in the directive under section 3.11 - additional requirements specific to particular risks – safety devices for diving equipment. Several other classes of diving equipment such as respiratory equipment (EN250:2002), buoyancy compensators (EN1809:1999), combined buoyancy and rescue devices (EN12628:2001), respiratory equipment for compressed nitrox and oxygen (EN13949:2004), rebreathers (EN14143:2004), and dry suits (EN14225-2:2005) fall under the PPE directive.
The general quality assurance standard ISO9001
Operational considerations for use in commercial diving operations
The acceptance of dive computers for use in commercial diving varies between countries and industrial sectors. Validation criteria have been a major obstacle to acceptance of diving computers for commercial diving. Millions of recreational and scientific dives each year are successful and without incident, but the use of dive computers remains prohibited for commercial diving operations in several jurisdictions because the algorithms used cannot be guaranteed safe to use, and the legislative bodies who can authorise their use have a duty of care to workers. Manufacturers do not want to invest in the expensive and tedious process of official validation, while regulatory bodies will not accept dive computers until a validation process has been documented.
Verification is the determination that a dive computer functions correctly, in that it correctly executes its programmed algorithm, while validation confirms that the algorithm provides the accepted level of risk.
If the decompression algorithm used in a series of dive computers is considered to be acceptable for commercial diving operations, with or without additional usage guidelines, then there are operational issues that need to be considered:
The computer must be simple to operate or it will probably not be accepted.
The display must be easily read in low visibility conditions to be effectively used.
The display must be clear and easily understood, even if the diver is influenced by nitrogen narcosis, to reduce the risk of confusion and poor decisions.
The decompression algorithm should be adjustable to more conservative settings, as some divers may want a more conservative profile.
The dive computer must be easy to download to collect profile data so that analysis of dives can be done.
Bottom timer
A bottom timer is an electronic device that records the depth at specific time intervals during a dive, and displays current depth, maximum depth, elapsed time and may also display water temperature and average depth. It does not calculate decompression data at all, and is equivalent to gauge mode on many dive computers.
Manufacturers
Benemec Oy, marketed by A.P.Valves (Buddy) and Mares
Citizen Watch
Cochran Undersea Technology (Cochran)
Deepblu
Delta P Technology (VR2)
Divesoft (Liberty)
Garmin
HeinrichsWeikamp (Open source)
HTM Sports: Dacor and Mares
HydroSpace Engineering(HSE)
Liquivision
Pelagic Pressure Systems, acquired by Aqua Lung in May 2015, marketed as: Aeris, Beuchat, Genesis, Hollis, Oceanic, Seemann, and Sherwood.
Ratio Computers
Scubapro-UWATEC owned by Johnson Outdoors
Seiko, also marketed by:
Apeks, Cressi, Dive Rite, Scubapro, Tusa, Zeagle
Shearwater Research (Shearwater)
Suunto
Technical Dive Computers
Uemis
Underwater Technology Center
VR Technology (VR3)
Value
Along with delayed surface marker buoys, dive computers stood out in a 2018 survey of European recreational divers and diving service providers as highly important safety equipment.
See also
References
Further reading
External links
Underwater diving safety equipment |
41502434 | https://en.wikipedia.org/wiki/Ultimaker | Ultimaker | Ultimaker is a 3D printer-manufacturing company based in the Netherlands, with offices and assembly line in the US. They make fused filament fabrication 3D printers, develop 3D printing software, and sell branded 3D printing materials. Their product line includes the Ultimaker S5 and S3, Ultimaker 3 series, Ultimaker 2+ series and Ultimaker Original+. These products are used by industries such as automotive, architecture, healthcare, education, and small scale manufacturing.
History
Ultimaker BV is a Dutch 3D printer company that was founded in 2011 by Martijn Elserman, Erik de Bruijn, and Siert Wijnia. Ultimaker started selling their products in May 2011. The company's foundation was laid at ProtoSpace Utrecht where Wijnia organized two workshops to build the RepRap Darwin 3D printer. Two Beta-workshops were organized at ProtoSpace Utrecht starting in September and December 2010, each consisting of 10 Monday evenings. Erik de Bruijn and Martijn Elserman assisted at those workshops. Frustration from their inability to get the Darwin design to work led to the inspiration to create their own design. Instead of sticking to the RepRap principle that their printer should be able to print its own parts, they designed their printer to be built mostly of laser cut plywood parts, that could be produced orders of magnitude faster than printed parts at the time. Their first prototypes bore the name "Ultimaker protobox" but newer prototypes were just titled "Ultimaker". In March 2011, Ultimaker ltd. released their first complete product, the "Ultimaker" (renamed in 2013 to "Ultimaker Original") under a Creative Commons BY-NC license. The Ultimaker Original was distributed as a Do It Yourself kit that hobbyists and technicians assembled themselves. It could print objects up to 210 mm x 210 mm x 205 mm at a maximum resolution of 20 microns.
Company milestones
2013
The Ultimaker 2 is released. The target markets are home users, schools, libraries, small businesses, and industrial designers who use 3D printing for rapid prototyping and production.
2015
Ultimaker's revenue doubles, with 35% of new customers coming from the North American market.
2017
Ultimaker's U.S. presence grows to include a network of 37 re-sellers.
2018
Ultimaker partners with material manufacturers DSM, BASF, DuPont Transportation & Advanced Polymers, Owens Corning, Mitsubishi, Henkel, Kuraray, Solvay and Clariant to create material profiles for printing high-level engineering plastics and composites.
Ultimaker opens a facility in Singapore to service Asia, Pacific and China markets and expands its manufacturing presence to three continents.
2018
The Ultimaker S5 is released. This is the company's first "large format" 3D printer, and it is also the first Ultimaker that can print with composite materials, such as glass- and carbon-fiber-filled nylons, straight from the factory with no modifications needed.
2019
Arkema joins material alliance program and releases FluorX filament.
The company moves its headquarters to Utrecht, The Netherlands.
The Ultimaker S3 is released. The S3 is a smaller version of the S5 and is practically very similar to the Ultimaker 3, though with an LED touchscreen identical to that on the S5 and a hinged glass door. The S3 also includes presets for composite materials, and a re-engineered feeder wheel to accommodate them.
2020
The Ultimaker 2+ Connect is released. The printer is an updated version of the Ultimaker 2+, featuring a TFT touchscreen in place of the older LCD display and rotary control wheel, the SD Card slot has been replaced with a USB slot, the feeder wheel has been upgraded and the build plate has been improved.
Software
Their first software ran under a modified version of Replicator-G. They changed this later to Cura because more and more users started using this software in favor of Replicator-G, which was originally produced with Makerbot in mind. When the lead developer for Cura started working for Ultimaker, Ultimaker Cura became the lead software product for Ultimaker. Cura rapidly became a favorite of 3D printing enthusiasts. A YouMagine Survey found that 58% of users surveyed used Cura, compared to 23% that used Slic3r. On September 26, 2017 the company announced that Cura had achieved one million users. This announcement was made at the TCT show. With the release of Cura 4.0, Ultimaker users were able to back up their files to the cloud. As of 2020 the software was processing 1.4 million jobs per week.
Printers
Ultimaker Original
The Ultimaker Original is the predecessor of the Ultimaker 2 and was released only a few months after the company was founded. The Ultimaker Original is sold as a kit containing laser-cut wood and the technical components. The printer must be assembled by the user and can thus be tailored to the user's preference and modified at will. In 2012, the Ultimaker Original was awarded Fastest and Most Accurate 3D printer available by MAKE Magazine.
Ultimaker Original+
The Ultimaker Original+ is the main successor to the Ultimaker Original. It has an upgraded 24V power supply and heated build plate, however it is not compatible with dual extrusion due to the limitations of the power supply.
Ultimaker 2
The Ultimaker 2 is Ultimaker's first out-of-the-box 3D printer. After transportation, the user must calibrate the build plate and insert filament before printing. The Ultimaker 2 was released in 2013 and laid the foundations for two further printers to be added to the family before it was upgraded in 2015. Like the rest of the family, it uses an SD card to print and an LCD screen and rotary wheel to navigate through its menus. The Ultimaker 2 is also single extrusion only. The Ultimaker 2 and its upgraded version, the Ultimaker 2+, have won numerous awards and are widely regarded as some of the best commercial 3D printers of their time.
Ultimaker 2 Go
The Ultimaker 2 Go is a compact and portable version of the Ultimaker 2. The printer has an exceptionally small build volume of just 120x120x115mm, allowing it to be moved from place to place in the special backpack provided. The Ultimaker 2 Go's smaller size does come at a cost, however, as the build plate is not heated and it is thus highly recommended to apply masking tape to the build plate before printing.
Ultimaker 2 Extended
The Ultimaker 2 Extended is technically and physically identical to the Ultimaker 2, except for its 100mm higher build volume. It and the Ultimaker 2 Go were released simultaneously in April 2015.
Ultimaker 2+
The Ultimaker 2+ is the upgraded successor to the Ultimaker 2. It features an improved feeder wheel and tensioning system, interchangeable nozzles and a redesigned nozzle heating system and fan.
Ultimaker 2 Extended+
The Ultimaker 2 Extended+ is a taller version of the Ultimaker 2+ and an upgraded version of the Ultimaker 2 Extended. Again, its print volume is 100mm higher, but it is otherwise technically identical to its normal-sized version.
Ultimaker 3
The Ultimaker 3 is the successor to the successful Ultimaker 2+ family. It features dual extrusion and compatibility with various other Ultimaker materials, including PVA, PC, ABS, Nylon and Breakaway. It was released in October 2016. The LCD control screen is recoloured from blue to white and the navigation of the menus has been updated. In addition to this, when an Ultimaker material is placed on the spool holder, the Ultimaker 3 will automatically detect the material and its colour through NFC, along with an estimate of its remaining length. In 2019, The Mediahq recognized the Ultimaker 3 as the Best 3D Printer of 2019 for Enthusiasts.
Ultimaker 3 Extended
The Ultimaker 3 Extended is a stretched version of the Ultimaker 3. Like the Ultimaker 2 Extended and 2 Extended+, the build volume is 100mm higher than on the Ultimaker 3.
Ultimaker S5
The Ultimaker S5 is the first member of Ultimaker's "S-line" printer family. It has the biggest build volume of any Ultimaker printer to date in all dimensions, and the build volume is consistent with both nozzles and dual extrusion. The Ultimaker S5 has a 4.7" colour touchscreen replacing the older LCD screen and rotary wheel, a feeder system that pauses when material runs out and is compatible with glass and carbon fibre composites, among many other materials, and a pair of hinged glass doors. Like the Ultimaker 3, the S5 prints from a USB drive, LAN or Wi-Fi. However, unlike the Ultimaker 3, the S5 was developed for the professional market. It is certified by Materialise for FDA-approved medical applications.
Ultimaker S3
In September 2019, the S3 was introduced as a smaller alternative to the S5. Like the S5, it was developed for the professional market. The S3 occupies a smaller footprint than the S5 and offers a smaller build volume. The dual extruders print using almost any 2.85 mm filament, including abrasive filaments.
Add-ons
Ultimaker materials
In addition to making 3D printers, Ultimaker also manufactures materials for its printers. These include:
Polypropylene (PP)
Polyvinyl alcohol (PVA)
Acrylonitrile butadiene styrene (ABS)
Polylactic acid (PLA)
Tough PLA
Copolyester (CPE)
Nylon
Polycarbonate (PC)
Thermoplastic polyurethane (TPU 95A)
Breakaway
A breakaway material was developed and released in 2017 to support multi-extrusion printing and reduce post-printing processing time.
Dual Extrusion Pack
As the Ultimaker Original had to be assembled by the user, it was extensively modified and tinkered with. Many people added a second nozzle to the printer, allowing for dual extrusion. For a brief period of time, Ultimaker themselves sold a Dual Extrusion Pack, allowing users to have dual extrusion without having to experiment extensively.
Ultimaker S5 Pro bundle
The S5 Pro bundle was announced at the TCT Show in September 2019. The S5 Pro bundle is an upgrade of the Ultimaker S5. It includes the S5 air manager to provide a closed environment for printing to keep ultra-fine particles out of the air while printing and the S5 material station that can hold up to 6 spools of filament for continuous 24/7 printing and to keep fragile materials such as PVA in an ideal temperature- and humidity-controlled environment. The company developed the setup as a bridge between industrial 3D printers and desktop printers.
Specifications
References
2011 establishments in the Netherlands
3D printer companies
3D printing
Companies based in Utrecht (province)
Companies established in 2011
Dutch brands
Free software
Fused filament fabrication
Organisations based in Utrecht (city) |
51467390 | https://en.wikipedia.org/wiki/List%20of%20computer%20museums | List of computer museums | Below is a list of computer museums around the world, organized by continent and country, then alphabetically by location.
Asia
South Korea
Nexon Computer Museum
Australia
The Australian Computer Museum Society, Inc, NSW - very large collection
The Nostalgia Box, Perth - Video Game Museum
Powerhouse Museum - Has Computer Exhibit
Monash Museum of Computing History, Monash University
Europe
Belgium
Computermuseum NAM-IP, Namur
Unisys Computermuseum, Haren (Brussels)
Croatia
Peek&Poke, Rijeka
Czech Republic
Apple Museum, Prague
Finland
Rupriikki Media Museum, Tampere
Finnish Museum of Games, Tampere
France
ACONIT, Grenoble
, Paris
FEB, Angers
Germany
Computerspielemuseum Berlin, Berlin - Video Game Museum
BINARIUM, Dortmund - Video Game and Personal Computer Museum
Heinz Nixdorf Museums Forum, Paderborn
Computermuseum der Fakultat Informatik, University of Stuttgart
Oldenburger Computer-Museum, Oldenburg
Computeum, Vilshofen, with a selection from the Munich Computer Warehouse, Private Collection
Deutsches Museum, Munich - Large computer collection in their Communications exhibit
technikum29 living museum, Frankfurt - Re-opened in January 2020.
Computerarchiv Muenchen, Munich - Computer, Video Games and Magazine Archive
Computermuseum der Fachhochschule Kiel, Kiel
Analog Computer Museum, Bad Schwalbach / Hettenhain - Large collection of analog computers, working and under restoration.
Greece
Hellenic IT Museum
Ireland
Computer and Communications Museum of Ireland, National University of Ireland
Israel
The Israeli Personal Computer Museum, Haifa
Italy
Museo dell'Informatica Funzionante, Palazzolo Acreide (Siracusa)
Museo del Computer, via per Occhieppo, 29, 13891 Camburzano (Biella)
Museo Interattivo di Archeologia Informatica, Cosenza
UNESCO Computer Museum, Padova
All About Apple Museum, Savona
VIGAMUS, Rome - Video Game Museum
Tecnologic@mente, Ivrea
Museo degli strumenti per il calcolo, Pisa
The Netherlands
Bonami SpelComputer Museum, Zwolle
Computer Museum Universiteit van Amsterdam, Amsterdam
Computermuseum Hack42, Arnhem
HomeComputerMuseum, Helmond
Rotterdams Radio Museum, Rotterdam
Poland
Muzeum Historii Komputerów i Informatyki, Katowice
Muzeum Gry i Komputery Minionej Ery (Muzeum Gier), Wrocław
Apple Muzeum Polska, Piaseczno
Portugal
LOAD ZX Spectrum Museum, Cantanhede
Museu Faraday, IST - Instituto Superior Técnico, Lisboa
Nostalgica - Museu de Videojogos e tecnologia, Lisboa
Museu dos Computadores Inforap, Braga
Museu Virtual da Informática, Universidade do Minho, Braga
Museu das Comunicações, Lisboa
Museu Nacional de História Natural e da Ciência Universidade de Lisboa, Lisboa
Russia
Museum of Soviet Arcade Machines, Moscow
Yandex Museum, Moscow
Yandex Museum, Saint-Petersburg
Moscow Apple Museum
Antimuseum of Computers and Games, Yekaterinburg
Slovenia
Computer Museum Društvo Računalniški Muzej, Ljubljana
Slovakia
Computer Museum SAV, Bratislava
Spain
Computer Museum Garcia Santesmases (MIGS), Complutense University
Museum of Informatics, Polytechnic University of Valencia
Museo de la Historia de la Computacion, Cáceres
Switzerland
Musée Bolo, Lausanne
Enter-Museum, Solothurn
Ukraine
Software & Computer Museum, Kyiv, Kharkiv
United Kingdom
Northwest Computer Museum, Leigh, Wigan
The National Museum of Computing, Bletchley Park
The Centre for Computing History, Cambridge
Retro Computer Museum, Leicester
Science Museum, London, London
National Archive for the History of Computing, University of Manchester
National Videogame Arcade, Nottingham
The Computing Futures Museum, Staffordshire University - In association with the BCS
Museum of Computing, Swindon
Time Line Computer Archive, Wigton
The Micro Museum, Ramsgate
Home Computer Museum, Hull
IBM Hursley Museum, Hursley
Derby Computer Museum
See also: Computer Conservation Society
North America
Canada
Personal Computer Museum, Brantford
iMusée, Montreal
York University Computer Museum or YUCoM, York University
University of Saskatchewan Computer Museum
United States
AZ
Southwest Museum of Engineering, Communications and Computation, Glendale, Arizona
CA
Computer History Museum, Mountain View, California
DigiBarn Computer Museum, Boulder Creek, California
Museum of Art and Digital Entertainment, Oakland, California
The Tech Museum of Innovation, San Jose, California
Intel Museum, Santa Clara, California
DC
Smithsonian National Museum of American History, Washington, D.C.
GA
Computer Museum of America, Roswell, Georgia
Museum of Technology at Middle Georgia State University, Macon, Georgia
KS
The Topeka Computing Museum, Topeka, Kansas - Now being liquidated, online archive only.
MD
System Source Computer Museum, Hunt Valley, Maryland
MN
Charles Babbage Institute, University of Minnesota
MT
American Computer & Robotics Museum, Bozeman, Montana
NJ
Vintage Computer Federation museum, Wall, New Jersey
NY
The Strong, International Center for the History of Electronic Games, Rochester, NY - Focus on Retrogaming but many games are on vintage personal computers.
PA
Kennett Classic Computer Museum, Kennett Square, Pennsylvania
Large Scale Systems Museum, Pittsburgh, Pennsylvania
Pennsylvania Computer Museum, Parkesburg, Pennsylvania
RI
Rhode Island Computer Museum, North Kingstown, Rhode Island
TX
Brazos Valley Computer Museum, Bryan, Texas
Museum of Computer Culture, Austin, Texas
National Videogame Museum, Frisco, Texas
VA
U.Va. Computer Museum, University of Virginia
WA
Living Computers: Museum + Labs, Seattle, Washington
Microsoft Visitor Center, Redmond, Washington
South America
Argentina
Museo de Informática UNPA-UARG, Río Gallegos
Museo de Informática de la República Argentina - Fundación ICATEC, Ciudad Autónoma de Buenos Aires
Espacio TEC, Bahia Blanca
Brasil
Museu Capixaba do Computador, Vitória/ES
Online
MV Museu de Tecnologia (Brazil)
Old Computer Museum
San Diego Computer Museum - Physical objects were donated to the San Diego State University Library, but still does online exhibits
Obsolete Computer Museum
Old-Computers.com
HP Computer Museum
Early Office Museum
IBM Archives
EveryMac.com
Bitsavers.org - Software and Document Archive
TAM (The Apple Museum) - Apple Computers and Products
Rewind Museum - Virtual museum with traveling physical exhibits
Virtual Museum of Computing
The Computer Collector
New Computer Museum
IPSJ Computer Museum - Computers of Japan
Freeman PC Museum
FEMICOM Museum - Femininity in 20th century Video games, computers and electronic toys
Home Computer Museum
Malware Museum - Malware programs from the 80's and 90's that have been stripped of their destructive properties.
History Computers
KASS Computer Museum - A computer history museum & private collection
Russian Virtual Computer Museum - a history of Soviet Computers from the late 1940s
Soviet Digital Electronics Museum - a museum of Soviet electronic calculators, PCs and some other devices
Development of Computer Science and Technologies in Ukraine - Ukrainian virtual Computer Museum
Spectrum Generation collection, supporting the LOAD ZX Spectrum Museum in Portugal
Home Computer Museum UK
See also
Computer museum
List of video game museums
References
History of computing
Lists of museums by subject
Video game museums |
13891636 | https://en.wikipedia.org/wiki/Biscom | Biscom | Biscom, Inc. is a privately held enterprise software company with headquarters in Westford, MA. Biscom’s primary focus is to provide secure document delivery solutions to regulated industries. The company develops and markets fax server solutions that facilitate inbound and outbound electronic fax communications, as well as managed file transfer, enterprise file synchronization and sharing, and document conversion solutions. Biscom is known for its ability to scale to deliver millions of documents per day and for a history of reliability that is required for mission critical processes.
Biscom was founded in 1986, by S.K. Ho, currently Biscom's Chairman. S.K. Ho was formerly the Director of Engineering at Wang Laboratories, where he designed and developed the Wang Word Processor and Wang Professional Image Systems; he holds nine major patents. Mr. Ho earned a BSME from Ordinance Engineering College in Taiwan and an MSEE from Drexel University.
Biscom Launches Fax Server Industry
Recognizing the opportunity to combine facsimile communications with computer applications, S.K. Ho left Wang Laboratories to found Biscom, Inc., and thus launched the fax server industry. An early application of Biscom’s fax server, the FAXCOM Server, applied print output from mainframe applications to an electronic form and merged this into a single TIFF document which could be delivered electronically via fax. This was a vast improvement over earlier processes in which the mainframe data would be printed to paper forms, and then mailed or sent via paper fax machine.
Fax Server Industry Evolves
Fax servers have continually evolved since 1986, supporting desktop faxing via email, via Web browser, via mobile applications, and via Application Programming Interfaces (APIs); integrating with directory services such as Microsoft’s Active Directory (and formerly Novell’s eDirectory); integrating with Voice over IP (VoIP) to support T.38 Fax over IP (FoIP); supporting paper-based faxing via multi function printers; enabling fax workflows with support for Optical Character Recognition (OCR) and barcode interpretation for rules based fax routing; and offering fax server capabilities in both premises and hosted solutions.
Biscom Expands Product Lines
Biscom has evolved into a provider of multiple product lines, including Managed File Transfer solutions that enable secure and auditable delivery of files of all types and sizes; document conversion tools that convert to and from popular formats such as PCL (Printer Command Language), PostScript, PDF (Portable Document Format), and Microsoft Office; and Enterprise File Synchronization & Sharing products that give IT full control of data and user manageability. Biscom currently offers secure document delivery solutions that support multiple modes of transmission, document types, and workflow automation.
See also
Fax server
Enterprise File Synchronization and Sharing (EFSS)
References
Notes
Network World: File Transfer Solutions Take Pressure Off e-mail
Windows IT Pro: Microsoft TechEd 2007
Ferris Research: Assured Delivery for emails and Files
Linux Journal: Product of the Day: FAXCOM Server on Linux
Educause Quarterly: Managing Large Volumes of Assignments
Network Computing: Fax Servers – Fast, Efficient, and Very Much Alive
ENT - Through the Test Track - Fax Server Testing Data - Technology Information
Windows NT: The Fax Stops Here
Linux Fax Software
Technobabble - Fax Servers: Taming the Beast Once and For All
InformIt: Fax Servers - Serving Faxes More than Ever
Owen, Jeff, Telecom Reseller Reporter, Biscom: FoIP, September 10, 2012 Fax/FoIP
Novell: FAXCOM - All for One and One for All
Companies established in 1986
Software companies based in Massachusetts
Fax
Managed file transfer |
58839507 | https://en.wikipedia.org/wiki/Smita%20Thackeray | Smita Thackeray | Smita Thackeray is an Indian social activist and film producer. She is the chairperson and founder of Rahul Productions and Mukkti Foundation. She has worked in the field of Women's Safety, HIV/AIDS awareness and education. She first ventured into Motion pictures with 1999 Indian Hindi-language comedy film Haseena Maan Jaayegi which was released in June 1999, grossing 27 crores worldwide, after which she has gone on to work in Hindi and Marathi Film and Television Industry.
Early life
Smita Thackeray was born on 17 August 1958 to a Middle-class Maharashtrian family in Mumbai. Her parents are Madhukar Chitre and Kunda Chitre. She attended Chhabildas Girl's High School, Dadar. As a child she was trained in Marathi Classical singing. She completed her Bachelor's in Science (BSc), with Honors from Ruparel College, Mumbai, with Botany as a major.
Career
She started out at the Centaur Hotel on a small stipend, handling administrative and managerial tasks. Her interest in fashion led to the NARI boutique. She went on to found the Mukkti Foundation in 1997 and serves as its chairperson. She is the owner of Rahul Productions. She was president of the Indian Motion Picture Producers Association (IMPPA) from 2002 to 2004, during which time she started a conversation around the ethical screening of films and content on television and other media after release.
Philanthropy through Mukkti Foundation
Smita Thackeray's aim in setting up the Mukkti Foundation was to raise awareness about HIV and AIDS and to eradicate drug abuse among the youth. Between July 1999 and May 2000, Mukkti raised funds for various causes through celebrity cricket and football matches: Rs 5 lakh was contributed to the Gujarat cyclone relief fund (1998), Rs 50 lakh for the bereaved families of Indian soldiers of the Kargil War, and Rs 41 lakh for the drought victims of Rajasthan and Gujarat. Between 1998 and 2008, the Mukkti Foundation hosted its annual AIDS show to mark World AIDS Day, where film stars and other celebrities joined hands to spread the message of AIDS awareness.
The effort to destigmatize HIV-affected individuals led to a 13-episode television chat show hosted by Sonu Nigam in November 2000, and in 2003, three public service announcements were produced starring the celebrities Amitabh Bachchan, Waheeda Rehman and Akshay Kumar. One lakh pledges were collected to support the cause of an AIDS-free life in 2009 under the campaign "I Pledge" with John Abraham. In December 2018, Smita Thackeray, along with Sunny Leone, Nisha Harale and Rohit Verma, headed a 'Freedom Parade' in solidarity with the LGBTQ community, who suffer the most stigma around HIV/AIDS.
In 2014, Me Mukti Marshals, individuals trained to support the Police and RPF, were deployed on Mumbai local trains at night to protect women travelling on them.
Presidency of IMPPA
In 2001, Smita Thackeray was elected as the first female president of IMPPA. Video piracy was a major issue that was killing revenue for producers; she facilitated an MOU between film producers and cable TV associations in December 2001, saving the filmmakers Rs 1 crore daily, an amount previously lost to producers through illegal telecasting. For the first time in the history of IMPPA, a fundraiser, Ehsaas 2002, was hosted by the association to raise funds for medical and education centres for spot boys and light men. In 2004, Indian producers were invited to Switzerland by the Swiss Consulate to promote film making, as a sign of welcome for Indian cinema by the Swiss president, Joseph Deiss.
Awards and recognitions
Indian Motion Picture Producers Association presented an Appreciation award at the 69th Annual General Meeting on 16 September 2008.
LR Active Oil presented Women's Prerna Award for immense contribution in Social Service and Politics to India Society in 2013.
HEX World presented the News Makers Achievement 2010 for social contribution
Personal life
Smita Thackeray is the daughter of Madhukar Chitre and Kunda Chitre. She has two sisters, Swati and Sushma. She grew up in a middle-class family in suburban Mumbai. She married Jaidev Thackeray, son of Bal Thackeray, in 1986; they divorced in 2004, after which she continued to stay in her in-laws' home, 'Matoshree'. She has two sons, Rahul Thackeray, the elder, and Aaishvary Thackeray, the younger. Both graduated from the American School of Bombay. Rahul went on to graduate from Toronto Film School and is currently a Marathi and Hindi film writer, director and filmmaker.
Filmography
Films produced under Rahul Productions
Hindi motion pictures
Haseena Maan Jaayegi (1999)
Sandwich (2006)
Kaisay Kahein (2007)
Society Kaam Se Gayi (2011)
Hum Jo Keh Na Paaye (2005)
Hindi television shows
Red Ribbon Show (1999) Star TV
Khel (2000) Sony TV
Kabhi Khushi Kabhi Dhoom (2004) Star Plus
Marathi television shows
Jhep (ETV Marathi)
Bhagyavidhatha (ETV Marathi)
Vahini Saheb (Zee Marathi)
Ya Sukhano Ya (Zee Marathi)
Kulaswamini (Star Prawah)
Khel Mandla (Me Marathi)
Paarijaat (Saam TV)
Done Kinare Doghe Apan (Star Prawah)
Marathi motion pictures
R.A.A.D.A. Rocks (2011)
Films distributed by Magic Cloud
Ugly (2013)
The Shaukeens (2014)
Fugly (2014)
Hate Story 2 (2014)
Ragini MMS 2 (2014)
Roy (2015)
All is Well (2015)
Baby (2015)
Begum Jaan (2017)
Dhyanimani (2017)
References
External links
Mukkti Foundation
Film producers from Mumbai
Indian film distributors
Living people
Hindi film producers
Marathi film producers
Year of birth missing (living people) |
47288266 | https://en.wikipedia.org/wiki/DevConf.cz | DevConf.cz | DevConf.cz (Developer Conference) is an annual, free, Red Hat sponsored community conference for developers, admins, DevOps engineers, testers, documentation writers and other contributors to open source technologies. The conference includes topics on Linux, Middleware, Virtualization, Storage and Cloud. At DevConf.cz, FLOSS communities sync, share, and hack on upstream projects together in the city of Brno, Czech Republic.
DevConf.cz is held annually, usually during the last weekend of January (one week before FOSDEM), at the Brno University of Technology Faculty of Information Technology campus.
The topics of the conference in 2020 were: Agile, DevOps & CI/CD, Cloud and Containers, Community, Debug / Tracing, Desktop, Developer Tools, Documentation, Fedora, Frontend / UI / UX, Kernel, Immutable OS, IoT (Internet of Things), Microservices, Middleware, ML / AI / Big Data, Networking, Platform / OS, Quality / Testing, Security / IdM, Storage / Ceph / Gluster and Virtualization.
Conference history
The Developer Conference started in 2009 and followed the FUDCon, the Fedora User and Developer conference.
2009
September 10–11 at Faculty of Informatics Masaryk University - focused on Linux developers, advanced users and developers of JBoss
33 talks and workshops
Topics ranged from JBoss subjects such as Jopr, jboss.org, Drools and jBPM to Fedora subjects such as KDE and core utils
The keynote speakers were Radovan Musil and Radek Vokal
2011
February 11–12 at Faculty of Informatics Masaryk University
Two parallel tracks
around 200 attendees
2012
February 17–18 at Faculty of Informatics Masaryk University
60 talks (95% in English)
more than 600 attendees
GTK+ hackfest and GNOME Docs Sprint
2013
February 23–24 at Faculty of Informatics Masaryk University
60 talks, 18 lightning talks, 20 workshops
around 700 attendees
2014
February 7–9 at Faculty of Informatics Masaryk University
6 parallel tracks (3 talk tracks and 3 workshop tracks)
more than 1000 attendees
The keynote speaker was Tim Burke from Red Hat
2015
February 6–8 at Faculty of Faculty of Information Technology Brno University of Technology
154 workshops and talks
more than 1000 attendees
8 parallel tracks (5 talk tracks and 3 workshop tracks)
Winners of the Winter of Code competition were announced
The keynote speakers were Tim Burke, a vice president of Red Hat engineering, and Mark Little, a vice president of Red Hat engineering and CTO of JBoss Middleware
2016
February 5–7 at Faculty of Information Technology Brno University of Technology
203 workshops and talks
1600 attendees
8 parallel tracks (5 talk tracks and 3 workshop tracks)
Keynote speakers: Tim Burke, Jan Wildeboer, Denis Dumas and Matthew Miller
2017
January 27-29 at Faculty of Information Technology Brno University of Technology
220 talks, workshops, keynotes across 20 tracks, 30 lightning talks
3-5 sessions for Storage, Cloud, Networking, .net and Desktop
6-10 sessions for Microservices, OpenStack, Testing, DevTools, Virtualization, DevOps and Agile
11-15 sessions for Config Management, Linux and OpenShift
JUDCon had 18 sessions
Security, Fedora and Containers each had around 20 sessions
1600 attendees
13 community project booths, 4 community meetups
2018
January 26-28 at Faculty of Information Technology Brno University of Technology
3 keynotes, 215 talks and discussions and 26 workshops across 20 tracks
15 community project booths, 6 community meetups
Keynote speakers: Chris Wright, Hugh Brock, Michael McGrath, Jim Perrin, Matthew Miller
1600 attendees
2019
January 25-27 at Faculty of Information Technology Brno University of Technology
273 talks and workshops
18 meetups and 6 activities
1500 attendees
2020
January 24-26 at Faculty of Information Technology Brno University of Technology
3 keynotes, 210 talks, 11 discussions, 23 workshops across 21 tracks
20 community project booths, 12 community meetups and 5 fun activities
Keynote speakers: Leslie Hawthorn, William Benton & Christoph Goern, Karanbir Singh & Jeremy Eder
1600 attendees
Financing
Entrance and participation in the event are entirely free. It is financed by a variety of teams at Red Hat. The event is mainly organized and run by volunteers.
See also
List of free-software events
FOSDEM
References
External links
Schedule of DevConf.cz 2015
DevConf.cz 2018 schedule
Devconf.cz 2019 schedule
Devconf.cz 2020 schedule
Linux conferences
Free-software conferences
Recurring events established in 2009 |
99326 | https://en.wikipedia.org/wiki/Richard%20Hamming | Richard Hamming | Richard Wesley Hamming (February 11, 1915 – January 7, 1998) was an American mathematician whose work had many implications for computer engineering and telecommunications. His contributions include the Hamming code (which makes use of a Hamming matrix), the Hamming window, Hamming numbers, sphere-packing (or Hamming bound), and the Hamming distance.
Born in Chicago, Hamming attended University of Chicago, University of Nebraska and the University of Illinois at Urbana–Champaign, where he wrote his doctoral thesis in mathematics under the supervision of Waldemar Trjitzinsky (1901–1973). In April 1945 he joined the Manhattan Project at the Los Alamos Laboratory, where he programmed the IBM calculating machines that computed the solution to equations provided by the project's physicists. He left to join the Bell Telephone Laboratories in 1946. Over the next fifteen years he was involved in nearly all of the Laboratories' most prominent achievements. For his work he received the Turing Award in 1968, being its third recipient.
After retiring from the Bell Labs in 1976, Hamming took a position at the Naval Postgraduate School in Monterey, California, where he worked as an adjunct professor and senior lecturer in computer science, and devoted himself to teaching and writing books. He delivered his last lecture in December 1997, just a few weeks before he died from a heart attack on January 7, 1998.
Early life
Richard Wesley Hamming was born in Chicago, Illinois, on February 11, 1915, the son of Richard J. Hamming, a credit manager, and Mabel G. Redfield. He grew up in Chicago, where he attended Crane Technical High School and Crane Junior College.
Hamming initially wanted to study engineering, but money was scarce during the Great Depression, and the only scholarship offer he received came from the University of Chicago, which had no engineering school. Instead, he became a science student, majoring in mathematics, and received his Bachelor of Science degree in 1937. He later considered this a fortunate turn of events. "As an engineer," he said, "I would have been the guy going down manholes instead of having the excitement of frontier research work."
He went on to earn a Master of Arts degree from the University of Nebraska in 1939, and then entered the University of Illinois at Urbana–Champaign, where he wrote his doctoral thesis on Some Problems in the Boundary Value Theory of Linear Differential Equations under the supervision of Waldemar Trjitzinsky. His thesis was an extension of Trjitzinsky's work in that area. He looked at Green's function and further developed Jacob Tamarkin's methods for obtaining characteristic solutions. While he was a graduate student, he discovered and read George Boole's The Laws of Thought.
The University of Illinois at Urbana–Champaign awarded Hamming his Doctor of Philosophy in 1942, and he became an instructor in mathematics there. He married Wanda Little, a fellow student, on September 5, 1942, immediately after she was awarded her own Master of Arts in English literature. They would remain married until his death, and had no children. In 1944, he became an assistant professor at the J.B. Speed Scientific School at the University of Louisville in Louisville, Kentucky.
Manhattan Project
With World War II still ongoing, Hamming left Louisville in April 1945 to work on the Manhattan Project at the Los Alamos Laboratory, in Hans Bethe's division, programming the IBM calculating machines that computed the solution to equations provided by the project's physicists. His wife Wanda soon followed, taking a job at Los Alamos as a human computer, working for Bethe and Edward Teller. Hamming later recalled that:
Hamming remained at Los Alamos until 1946, when he accepted a post at the Bell Telephone Laboratories (BTL). For the trip to New Jersey, he bought Klaus Fuchs's old car. When he later sold it just weeks before Fuchs was unmasked as a spy, the FBI regarded the timing as suspicious enough to interrogate Hamming. Although Hamming described his role at Los Alamos as being that of a "computer janitor", he saw computer simulations of experiments that would have been impossible to perform in a laboratory. "And when I had time to think about it," he later recalled, "I realized that it meant that science was going to be changed".
Bell Laboratories
At the Bell Labs Hamming shared an office for a time with Claude Shannon. The Mathematical Research Department also included John Tukey and Los Alamos veterans Donald Ling and Brockway McMillan. Shannon, Ling, McMillan and Hamming came to call themselves the Young Turks. "We were first-class troublemakers," Hamming later recalled. "We did unconventional things in unconventional ways and still got valuable results. Thus management had to tolerate us and let us alone a lot of the time."
Although Hamming had been hired to work on elasticity theory, he still spent much of his time with the calculating machines. Before he went home on one Friday in 1947, he set the machines to perform a long and complex series of calculations over the weekend, only to find when he arrived on Monday morning that an error had occurred early in the process and the calculation had failed. Digital machines manipulated information as sequences of zeroes and ones, units of information that Tukey would christen "bits". If a single bit in a sequence was wrong, then the whole sequence would be. To detect this, a parity bit was used to verify the correctness of each sequence. "If the computer can tell when an error has occurred," Hamming reasoned, "surely there is a way of telling where the error is so that the computer can correct the error itself."
Hamming set himself the task of solving this problem, which he realised would have an enormous range of applications. Each bit can only be a zero or a one, so if you know which bit is wrong, then it can be corrected. In a landmark paper published in 1950, he introduced the concept of the number of positions in which two code words differ, and therefore of how many changes are required to transform one code word into another, which is today known as the Hamming distance. Hamming thereby created a family of mathematical error-correcting codes, which are called Hamming codes. This not only solved an important problem in telecommunications and computer science, it opened up a whole new field of study.
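As an illustration of these two ideas, the following minimal Python sketch (not Hamming's original formulation; the function names are illustrative and the bit layout follows the common textbook convention with parity bits in positions 1, 2 and 4) computes the Hamming distance between two words and shows how a Hamming(7,4) code locates and corrects a single flipped bit:

```python
def hamming_distance(a, b):
    """Number of positions at which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def encode_7_4(d):
    """Encode 4 data bits as a Hamming(7,4) codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # parity over positions 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def correct_7_4(word):
    """Locate and flip a single corrupted bit using the parity checks."""
    w = list(word)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s3      # 0 means no error detected
    if syndrome:
        w[syndrome - 1] ^= 1             # syndrome is the 1-based error position
    return w

codeword = encode_7_4([1, 0, 1, 1])
received = codeword.copy()
received[5] ^= 1                          # corrupt one bit in transit
assert hamming_distance(codeword, received) == 1
assert correct_7_4(received) == codeword
```

The syndrome formed from the three parity checks is zero when no error is detected and otherwise names the position of the corrupted bit, which is what makes single-error correction possible.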
The Hamming bound, also known as the sphere-packing or volume bound, is a limit on the parameters of an arbitrary block code. It comes from an interpretation in terms of packing spheres, defined by the Hamming distance, into the space of all possible code words. It gives an important limitation on the efficiency with which any error-correcting code can utilize the space in which its code words are embedded. A code which attains the Hamming bound is said to be a perfect code. Hamming codes are perfect codes.
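In the notation commonly used today (a standard statement of the bound, not drawn from the text above), a q-ary block code of length n and minimum distance d, correcting up to t errors, can contain at most

```latex
A_q(n, d) \le \frac{q^{n}}{\sum_{k=0}^{t} \binom{n}{k} (q-1)^{k}},
\qquad t = \left\lfloor \frac{d-1}{2} \right\rfloor
```

codewords, with equality holding precisely for perfect codes. For the binary Hamming(7,4) code, for example, 2^4 × (1 + 7) = 2^7, so the bound is attained.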
Returning to differential equations, Hamming studied means of numerically integrating them. A popular approach at the time was Milne's Method, attributed to William Edmund Milne. This had the drawback of being unstable, so that under certain conditions the result could be swamped by roundoff noise. Hamming developed an improved version, the Hamming predictor-corrector. This was in use for many years, but has since been superseded by the Adams method. He did extensive research into digital filters, devising a new filter, the Hamming window, and eventually writing an entire book on the subject, Digital Filters (1977).
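The window itself has a simple closed form. The sketch below is a minimal Python illustration using the classic 0.54/0.46 coefficients (some libraries use the slightly different exact value 25/46); the function name is illustrative:

```python
import math

def hamming_window(N):
    """Hamming window coefficients for an N-point window."""
    if N == 1:
        return [1.0]
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

w = hamming_window(8)
# The endpoints sit at 0.54 - 0.46 = 0.08 rather than zero, which is what
# distinguishes the Hamming window from the related Hann window.
assert abs(w[0] - 0.08) < 1e-9 and abs(w[-1] - 0.08) < 1e-9
```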
During the 1950s, he programmed one of the earliest computers, the IBM 650, and with Ruth A. Weiss developed the L2 programming language, one of the earliest computer languages, in 1956. It was widely used within the Bell Labs, and also by external users, who knew it as Bell 2. It was superseded by Fortran when the Bell Labs' IBM 650 were replaced by the IBM 704 in 1957.
In A Discipline of Programming (1976), Edsger Dijkstra attributed to Hamming the problem of efficiently finding regular numbers. The problem became known as "Hamming's problem", and the regular numbers are often referred to as Hamming numbers in computer science, although he did not discover them.
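The problem is to generate these numbers (of the form 2^i·3^j·5^k) in increasing order without trial division. A short Python sketch of the standard three-pointer merge solution follows; it is an illustration of the idea, not code taken from Dijkstra's book:

```python
def hamming_numbers(count):
    """First `count` regular numbers, in increasing order.

    Each new element is the smallest of 2, 3 and 5 times an
    earlier element of the sequence already produced.
    """
    h = [1]
    i2 = i3 = i5 = 0
    while len(h) < count:
        nxt = min(2 * h[i2], 3 * h[i3], 5 * h[i5])
        h.append(nxt)
        if nxt == 2 * h[i2]:
            i2 += 1
        if nxt == 3 * h[i3]:
            i3 += 1
        if nxt == 5 * h[i5]:
            i5 += 1
    return h

print(hamming_numbers(10))   # [1, 2, 3, 4, 5, 6, 8, 9, 10, 12]
```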
Throughout his time at Bell Labs, Hamming avoided management responsibilities. He was promoted to management positions several times, but always managed to make these only temporary. "I knew in a sense that by avoiding management," he later recalled, "I was not doing my duty by the organization. That is one of my biggest failures."
Later life
Hamming served as president of the Association for Computing Machinery from 1958 to 1960. In 1960, he predicted that one day half of the Bell Labs' budget would be spent on computing. None of his colleagues thought that it would ever be so high, but his forecast actually proved to be too low. His philosophy on scientific computing appeared as the motto of his Numerical Methods for Scientists and Engineers (1962): "The purpose of computing is insight, not numbers."
In later life, Hamming became interested in teaching. Between 1960 and 1976, when he left the Bell labs, he held visiting or adjunct professorships at Stanford University, Stevens Institute of Technology, the City College of New York, the University of California at Irvine and Princeton University. As a Young Turk, Hamming had resented older scientists who had used up space and resources that would have been put to much better use by the young Turks. Looking at a commemorative poster of the Bell Labs' valued achievements, he noted that he had worked on or been associated with nearly all of those listed in the first half of his career at Bell Labs, but none in the second. He therefore resolved to retire in 1976, after thirty years.
In 1976 he moved to the Naval Postgraduate School in Monterey, California, where he worked as an Adjunct Professor and senior lecturer in computer science. He gave up research, and concentrated on teaching and writing books. He noted that:
Hamming attempted to rectify the situation with a new text, Methods of Mathematics Applied to Calculus, Probability, and Statistics (1985). In 1993, he remarked that "when I left BTL, I knew that that was the end of my scientific career. When I retire from here, in another sense, it's really the end." And so it proved. He became Professor Emeritus in June 1997, and delivered his last lecture in December 1997, just a few weeks before his death from a heart attack on January 7, 1998. He was survived by his wife Wanda.
Appearances
Hamming takes part in the 1962 TV series The Computer and the Mind of Man
Awards and professional recognition
Turing Award, Association for Computing Machinery, 1968.
Member of the National Academy of Engineering, 1980.
Harold Pender Award, University of Pennsylvania, 1981.
IEEE Richard W. Hamming Medal, 1988.
Fellow of the Association for Computing Machinery, 1994.
Basic Research Award, Eduard Rhein Foundation, 1996.
The IEEE Richard W. Hamming Medal, named after him, is an award given annually by the Institute of Electrical and Electronics Engineers (IEEE), for "exceptional contributions to information sciences, systems and technology", and he was the first recipient of this medal. The reverse side of the medal depicts a Hamming parity check matrix for a Hamming error-correcting code.
Bibliography
; second edition 1973
; Hemisphere Pub. Corp reprint 1989; Dover reprint 2012
; second edition 1983; third edition 1989.
; second edition 1986.
Unconventional introductory textbook which attempts to both teach calculus and give some idea of what it is good for at the same time. Might be of special interest to someone teaching an introductory calculus course using a conventional textbook in order to pick up some new pedagogical viewpoints.
Entertaining and instructive. Hamming tries to extract general lessons—both personal and technical – to aid one in having a successful technical career by telling stories from his own experiences.
Notes
References
External links
1915 births
1998 deaths
20th-century American mathematicians
American information theorists
Coding theorists
Naval Postgraduate School faculty
Numerical analysts
Manhattan Project people
Turing Award laureates
Fellows of the Association for Computing Machinery
Presidents of the Association for Computing Machinery
Fellow Members of the IEEE
University of Chicago alumni
University of Illinois at Urbana–Champaign alumni
University of Nebraska–Lincoln alumni
City College of New York faculty
Scientists from Chicago
University of Louisville faculty
Mathematicians from Illinois |
68427617 | https://en.wikipedia.org/wiki/Penril%20DataComm%20Networks%20Inc. | Penril DataComm Networks Inc. | Penril DataComm Networks Inc.
was a computer telecommunications hardware company that made some acquisitions and was eventually split into two parts: one was acquired by Bay Networks and the other was a newly formed company named Access Beyond. The focus of both company's products was end-to-end data transfer. By the mid-1990s, with the popularization of the internet, this was no longer of wide interest.
History
Penril, whose earnings reports and other financials were followed by The New York Times in the 1990s, made several acquisitions but also grew internally. Following its Datability acquisition it renamed itself Penril Datability Networks.
By the time the 1968-founded Penril was acquired by Bay their name was Penril DataComm Networks. The company, which as of 1985 "had made 14 acquisitions in 12 years," also had done extensive work regarding quality control, and leveraged their product line by what The Washington Post called clever packaging: "software, cables, instructions and telephone support" sold to those less technically skilled as "Network in a Box."
Datability
Datability Software Systems Inc. was the initial name of what by 1991 became Datability, Inc., "a manufacturer of hardware that links computer networks." The 1977-founded firm began as a software consulting company, especially in the area of databases. To speed up project development they built a program generator, which they marketed as Control 10/20 (targeted at users of Digital Equipment Corporation's DECsystem-10 and DECSYSTEM-20). After trying their hand at time-sharing they built hardware to enhance bridging these computers to DEC's VAX product line. In particular they focused on Digital's LAT protocol, selling "boxes" that reimplemented the protocol at a lower price than DEC's. They later expanded into other areas of telecommunications hardware. The firm relocated to a larger manufacturing plant in 1991 and was acquired by Penril in 1993.
Access Beyond
Access Beyond was initially housed by Penril, from which it was spun off. A securities analyst noted that Access began operations with no debt. They subsequently merged with Hayes Corporation. Some of the funds brought to the merger came from a sale by Penril of two of its divisions, each bringing about $4 million.
Ron Howard
Ron Howard, founder of Datability, became part of Penril when the latter acquired the former, and was CEO of Access Beyond when it was spun off by Penril. Access merged with Hayes Microcomputer Products and was renamed Hayes Corp, at which time Howard became executive VP of business development and corporate vice chairman of Hayes.
People
In an industry where subcontracting was then common and many recent immigrants came from cultures of six-day work weeks, immigrant assembly-line workers made up about 25% of Penril's workforce, roughly half the proportion at other firms. Placement was overseen by government agencies.
Controversy
Penril had a joint development agreement, beginning in 1990, with a Standard Microsystems Corporation (SMSC) subsidiary. A dispute arose, and the matter was brought to court. Penril was awarded $3.5 million in 1996.
References
Communication software
Data management
History of software
Software companies of the United States
History of telecommunications
History of computing hardware |
48563 | https://en.wikipedia.org/wiki/Air%20traffic%20control | Air traffic control | Air traffic control (ATC) is a service provided by ground-based air traffic controllers who direct aircraft on the ground and through a given section of controlled airspace, and can provide advisory services to aircraft in non-controlled airspace. The primary purpose of ATC worldwide is to prevent collisions, organize and expedite the flow of air traffic, and provide information and other support for pilots. In some countries, ATC plays a security or defensive role, or is operated by the military.
Air traffic controllers monitor the location of aircraft in their assigned airspace by radar and communicate with the pilots by radio. To prevent collisions, ATC enforces traffic separation rules, which ensure each aircraft maintains a minimum amount of empty space around it at all times. In many countries, ATC provides services to all private, military, and commercial aircraft operating within its airspace. Depending on the type of flight and the class of airspace, ATC may issue instructions that pilots are required to obey, or advisories (known as flight information in some countries) that pilots may, at their discretion, disregard. The pilot in command is the final authority for the safe operation of the aircraft and may, in an emergency, deviate from ATC instructions to the extent required to maintain safe operation of their aircraft.
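As a purely illustrative sketch of the separation concept (the minima, names and simplified geometry here are assumptions for the example, not those of any real ATC system), a conflict check against example en-route minima of 5 nautical miles laterally and 1,000 feet vertically might look like this in Python:

```python
import math

# Illustrative minima only; real separation standards vary by airspace,
# surveillance capability and procedure.
LATERAL_MIN_NM = 5.0
VERTICAL_MIN_FT = 1000.0

def lateral_distance_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles (haversine formula)."""
    r_nm = 3440.065  # mean Earth radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

def separated(ac1, ac2):
    """True if either the lateral or the vertical minimum is satisfied."""
    lateral = lateral_distance_nm(ac1["lat"], ac1["lon"], ac2["lat"], ac2["lon"])
    vertical = abs(ac1["alt_ft"] - ac2["alt_ft"])
    return lateral >= LATERAL_MIN_NM or vertical >= VERTICAL_MIN_FT

a = {"lat": 52.00, "lon": 4.00, "alt_ft": 35000}
b = {"lat": 52.02, "lon": 4.05, "alt_ft": 36000}
print(separated(a, b))   # True: 1,000 ft of vertical separation exists
```

Operational systems are far more involved: the applicable minima depend on airspace class and surveillance capability, and controllers and automation project trajectories forward in time rather than checking only current positions.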
Language
Pursuant to requirements of the International Civil Aviation Organization (ICAO), ATC operations are conducted either in the English language or the language used by the station on the ground. In practice, the native language for a region is normally used; however, English must be used upon request.
History
In 1920, Croydon Airport, London was the first airport in the world to introduce air traffic control. The "aerodrome control tower" was a wooden hut with windows on all four sides. It was commissioned on February 25, 1920 and provided basic traffic, weather and location information to pilots.
In the United States, air traffic control developed three divisions. The first of these, air mail radio stations (AMRS), was created in 1922 after World War I, when the U.S. Post Office began using techniques developed by the Army to direct and track the movements of reconnaissance aircraft. Over time, the AMRS morphed into flight service stations. Today's flight service stations do not issue control instructions, but provide pilots with many other flight-related informational services. They do relay control instructions from ATC in areas where flight service is the only facility with radio or phone coverage. The first airport traffic control tower, regulating arrivals, departures and surface movement of aircraft at a specific airport, opened in Cleveland in 1930. Approach/departure control facilities were created after adoption of radar in the 1950s to monitor and control the busy airspace around larger airports. The first air route traffic control center (ARTCC), which directs the movement of aircraft between departure and destination, was opened in Newark in 1935, followed in 1936 by Chicago and Cleveland. Currently in the U.S., the Federal Aviation Administration (FAA) operates 22 ARTCCs.
After the 1956 Grand Canyon mid-air collision, killing all 128 on board, the FAA was given the air-traffic responsibility over the United States in 1958, and this was followed by other countries.
In 1960, Britain, France, Germany and the Benelux countries set up Eurocontrol, intending to merge their airspaces.
The first and only attempt to pool controllers between countries is the Maastricht Upper Area Control Centre (MUAC), founded in 1972 by Eurocontrol and covering Belgium, Luxembourg, the Netherlands and north-western Germany. In 2001, the EU aimed to create a "Single European Sky", hoping to boost efficiency and gain economies of scale.
Airport traffic control tower
The primary method of controlling the immediate airport environment is visual observation from the airport control tower. The tower is a tall, windowed structure located on the airport grounds. Air traffic controllers are responsible for the separation and efficient movement of aircraft and vehicles operating on the taxiways and runways of the airport itself, and aircraft in the air near the airport, generally 5 to 10 nautical miles (9 to 18 km) depending on the airport procedures. A controller must carry out the job by means of the precise and effective application of rules and procedures that, however, need flexible adjustments according to differing circumstances, often under time pressure. A study comparing stress in the general population with stress in this kind of system showed markedly higher stress levels for controllers. This variation can be explained, at least in part, by the characteristics of the job.
Surveillance displays are also available to controllers at larger airports to assist with controlling air traffic. Controllers may use a radar system called secondary surveillance radar for airborne traffic approaching and departing. These displays include a map of the area, the position of various aircraft, and data tags that include aircraft identification, speed, altitude, and other information described in local procedures. In adverse weather conditions the tower controllers may also use Surface Movement Radar (SMR), Surface Movement Guidance and Control System (SMGCS) or Advanced Surface Movement Guidance and Control System (ASMGCS) to control traffic on the maneuvering area (taxiways and runway).
The areas of responsibility for tower controllers fall into three general operational disciplines: local control or air control, ground control, and flight data / clearance delivery—other categories, such as Apron control or ground movement planner, may exist at extremely busy airports. While each tower may have unique airport-specific procedures, such as multiple teams of controllers ('crews') at major or complex airports with multiple runways, the following provides a general concept of the delegation of responsibilities within the tower environment.
Remote and virtual tower (RVT) is a system based on air traffic controllers being located somewhere other than at the local airport tower and still able to provide air traffic control services. Displays for the air traffic controllers may be live video, synthetic images based on surveillance sensor data, or both.
Ground control
Ground control (sometimes known as ground movement control) is responsible for the airport "movement" areas, as well as areas not released to the airlines or other users. This generally includes all taxiways, inactive runways, holding areas, and some transitional aprons or intersections where aircraft arrive, having vacated the runway or departure gate. Exact areas and control responsibilities are clearly defined in local documents and agreements at each airport. Any aircraft, vehicle, or person walking or working in these areas is required to have clearance from ground control. This is normally done via VHF/UHF radio, but there may be special cases where other procedures are used. Aircraft or vehicles without radios must respond to ATC instructions via aviation light signals or else be led by vehicles with radios. People working on the airport surface normally have a communications link through which they can communicate with ground control, commonly either by handheld radio or even cell phone. Ground control is vital to the smooth operation of the airport, because this position impacts the sequencing of departure aircraft, affecting the safety and efficiency of the airport's operation.
Some busier airports have surface movement radar (SMR), such as ASDE-3, AMASS or ASDE-X, designed to display aircraft and vehicles on the ground. These are used by ground control as an additional tool to control ground traffic, particularly at night or in poor visibility. There is a wide range of capabilities on these systems as they are being modernized. Older systems will display a map of the airport and the target. Newer systems include the capability to display higher quality mapping, radar targets, data blocks, and safety alerts, and to interface with other systems such as digital flight strips.
Air control or local control
Air control (known to pilots as "tower" or "tower control") is responsible for the active runway surfaces. Air control clears aircraft for takeoff or landing, ensuring that prescribed runway separation will exist at all times. If the air controller detects any unsafe conditions, a landing aircraft may be instructed to "go-around" and be re-sequenced into the landing pattern. This re-sequencing will depend on the type of flight and may be handled by the air controller, approach or terminal area controller.
Within the tower, a highly disciplined communications process between air control and ground control is an absolute necessity. Air control must ensure that ground control is aware of any operations that will impact the taxiways, and work with the approach radar controllers to create "gaps" in the arrival traffic to allow taxiing traffic to cross runways and to allow departing aircraft to take off. Ground control needs to keep the air controllers aware of the traffic flow towards their runways in order to maximise runway utilisation through effective approach spacing. Crew resource management (CRM) procedures are often used to ensure this communication process is efficient and clear. Within ATC, it is usually known as TRM (Team Resource Management) and the level of focus on TRM varies within different ATC organisations.
Flight data and clearance delivery
Clearance delivery is the position that issues route clearances to aircraft, typically before they commence taxiing. These clearances contain details of the route that the aircraft is expected to fly after departure. Clearance delivery or, at busy airports, Ground Movement Planner (GMP) or Traffic Management Coordinator (TMC) will, if necessary, coordinate with the relevant radar center or flow control unit to obtain releases for aircraft. At busy airports, these releases are often automatic and are controlled by local agreements allowing "free-flow" departures. When weather or extremely high demand for a certain airport or airspace becomes a factor, there may be ground "stops" (or "slot delays") or re-routes may be necessary to ensure the system does not get overloaded. The primary responsibility of clearance delivery is to ensure that the aircraft have the correct aerodrome information, such as weather and airport conditions, the correct route after departure and time restrictions relating to that flight. This information is also coordinated with the relevant radar center or flow control unit and ground control in order to ensure that the aircraft reaches the runway in time to meet the time restriction provided by the relevant unit. At some airports, clearance delivery also plans aircraft push-backs and engine starts, in which case it is known as the Ground Movement Planner (GMP): this position is particularly important at heavily congested airports to prevent taxiway and apron gridlock.
Flight data (which is routinely combined with clearance delivery) is the position that is responsible for ensuring that both controllers and pilots have the most current information: pertinent weather changes, outages, airport ground delays/ground stops, runway closures, etc. Flight data may inform the pilots using a recorded continuous loop on a specific frequency known as the automatic terminal information service (ATIS).
Approach and terminal control
Many airports have a radar control facility that is associated with the airport. In most countries, this is referred to as terminal control and abbreviated to TMC; in the U.S., it is referred to as a TRACON (terminal radar approach control). While every airport varies, terminal controllers usually handle traffic within a set radius of the airport. Where there are many busy airports close together, one consolidated terminal control center may service all the airports. The airspace boundaries and altitudes assigned to a terminal control center, which vary widely from airport to airport, are based on factors such as traffic flows, neighboring airports and terrain. A large and complex example was the London Terminal Control Centre, which controlled traffic for five main London airports.
Terminal controllers are responsible for providing all ATC services within their airspace. Traffic flow is broadly divided into departures, arrivals, and overflights. As aircraft move in and out of the terminal airspace, they are handed off to the next appropriate control facility (a control tower, an en-route control facility, or a bordering terminal or approach control). Terminal control is responsible for ensuring that aircraft are at an appropriate altitude when they are handed off, and that aircraft arrive at a suitable rate for landing.
Not all airports have a radar approach or terminal control available. In this case, the en-route center or a neighboring terminal or approach control may co-ordinate directly with the tower on the airport and vector inbound aircraft to a position from where they can land visually. At some of these airports, the tower may provide a non-radar procedural approach service to arriving aircraft handed over from a radar unit before they are visual to land. Some units also have a dedicated approach unit which can provide the procedural approach service either all the time or for any periods of radar outage for any reason.
In the U.S., TRACONs are additionally designated by a three-digit alphanumeric code. For example, the Chicago TRACON is designated C90.
Area control center/en-route center
ATC provides services to aircraft in flight between airports as well. Pilots fly under one of two sets of rules for separation: visual flight rules (VFR) or instrument flight rules (IFR). Air traffic controllers have different responsibilities to aircraft operating under the different sets of rules. While IFR flights are under positive control, in the US and Canada VFR pilots can request flight following, which provides traffic advisory services on a time-permitting basis and may also provide assistance in avoiding areas of weather and flight restrictions, as well as bringing pilots into the ATC system before they need a clearance to enter certain airspace. Across Europe, pilots may request a "Flight Information Service", which is similar to flight following. In the UK it is known as a "basic service".
En-route air traffic controllers issue clearances and instructions for airborne aircraft, and pilots are required to comply with these instructions. En-route controllers also provide air traffic control services to many smaller airports around the country, including clearance off of the ground and clearance for approach to an airport. Controllers adhere to a set of separation standards that define the minimum distance allowed between aircraft. These distances vary depending on the equipment and procedures used in providing ATC services.
General characteristics
En-route air traffic controllers work in facilities called air traffic control centers, each of which is commonly referred to as a "center". The United States uses the equivalent term air route traffic control center. Each center is responsible for a given flight information region (FIR). Each flight information region covers many thousands of square miles of airspace and the airports within that airspace. Centers control IFR aircraft from the time they depart from an airport or terminal area's airspace to the time they arrive at another airport or terminal area's airspace. Centers may also "pick up" VFR aircraft that are already airborne and integrate them into the system. These aircraft must continue under VFR until the center provides a clearance.
Center controllers are responsible for issuing instructions to pilots to climb their aircraft to their assigned altitude while, at the same time, ensuring that the aircraft is properly separated from all other aircraft in the immediate area. Additionally, the aircraft must be placed in a flow consistent with the aircraft's route of flight. This effort is complicated by crossing traffic, severe weather, special missions that require large airspace allocations, and traffic density. When the aircraft approaches its destination, the center is responsible for issuing instructions to pilots so that they will meet altitude restrictions by specific points, as well as providing many destination airports with a traffic flow, which prohibits all of the arrivals being "bunched together". These "flow restrictions" often begin in the middle of the route, as controllers will position aircraft landing in the same destination so that when the aircraft are close to their destination they are sequenced.
As an aircraft reaches the boundary of a center's control area it is "handed off" or "handed over" to the next area control center. In some cases this "hand-off" process involves a transfer of identification and details between controllers so that air traffic control services can be provided in a seamless manner; in other cases local agreements may allow "silent handovers" such that the receiving center does not require any co-ordination if traffic is presented in an agreed manner. After the hand-off, the aircraft is given a frequency change and begins talking to the next controller. This process continues until the aircraft is handed off to a terminal controller ("approach").
Radar coverage
Since centers control a large airspace area, they will typically use long-range radar that has the capability, at higher altitudes, to see aircraft over a wide area around the radar antenna. They may also use other radar data when it provides a better "picture" of the traffic or when it can fill in a portion of the area not covered by the long-range radar.
In the U.S. system, at higher altitudes, over 90% of the U.S. airspace is covered by radar and often by multiple radar systems; however, coverage may be inconsistent at lower altitudes used by aircraft due to high terrain or distance from radar facilities. A center may require numerous radar systems to cover the airspace assigned to them, and may also rely on pilot position reports from aircraft flying below the floor of radar coverage. This results in a large amount of data being available to the controller. To address this, automation systems have been designed that consolidate the radar data for the controller. This consolidation includes eliminating duplicate radar returns, ensuring the best radar for each geographical area is providing the data, and displaying the data in an effective format.
Centers also exercise control over traffic travelling over the world's ocean areas. These areas are also flight information regions (FIRs). Because there are no radar systems available for oceanic control, oceanic controllers provide ATC services using procedural control. These procedures use aircraft position reports, time, altitude, distance, and speed to ensure separation. Controllers record information on flight progress strips and in specially developed oceanic computer systems as aircraft report positions. This process requires that aircraft be separated by greater distances, which reduces the overall capacity for any given route. See for example the North Atlantic Track system.
Some air navigation service providers (e.g., Airservices Australia, the U.S. Federal Aviation Administration, Nav Canada, etc.) have implemented automatic dependent surveillance – broadcast (ADS-B) as part of their surveillance capability. This new technology reverses the radar concept. Instead of radar "finding" a target by interrogating the transponder, the ADS-B equipped aircraft sends a position report as determined by the navigation equipment on board the aircraft. ADS-C is another mode of automatic dependent surveillance; however, ADS-C operates in the "contract" mode, in which the aircraft reports its position, automatically or when initiated by the pilot, at a predetermined time interval. It is also possible for controllers to request more frequent reports to more quickly establish aircraft position for specific reasons. However, since the cost for each report is charged by the ADS service providers to the company operating the aircraft, more frequent reports are not commonly requested except in emergency situations. ADS-C is significant because it can be used where it is not possible to locate the infrastructure for a radar system (e.g., over water). Computerized radar displays are now being designed to accept ADS-C inputs as part of the display. This technology is currently used in portions of the North Atlantic and the Pacific by a variety of states that share responsibility for the control of this airspace.
Precision approach radars (PAR) are commonly used by military controllers of the air forces of several countries to assist the pilot in the final phases of landing, in places where an instrument landing system and other sophisticated airborne equipment are unavailable, in marginal or near-zero visibility conditions. This procedure is also called a talkdown.
A radar archive system (RAS) keeps an electronic record of all radar information, preserving it for a few weeks. This information can be useful for search and rescue: when an aircraft has 'disappeared' from radar screens, a controller can review the last radar returns from the aircraft to determine its likely position. RAS is also useful to technicians who are maintaining radar systems.
Flight traffic mapping
The mapping of flights in real time is based on the air traffic control system and on volunteer ADS-B receivers. In 1991, data on the location of aircraft was made available by the Federal Aviation Administration to the airline industry. The National Business Aviation Association (NBAA), the General Aviation Manufacturers Association, the Aircraft Owners and Pilots Association, the Helicopter Association International, and the National Air Transportation Association petitioned the FAA to make ASDI information available on a "need-to-know" basis. Subsequently, NBAA advocated the broad-scale dissemination of air traffic data. The Aircraft Situational Display to Industry (ASDI) system now conveys up-to-date flight information to the airline industry and the public. Some companies that distribute ASDI information are FlightExplorer, FlightView, and FlyteComm. Each company maintains a website that provides free updated information to the public on flight status. Stand-alone programs are also available for displaying the geographic location of airborne IFR (instrument flight rules) air traffic anywhere in the FAA air traffic system. Positions are reported for both commercial and general aviation traffic. The programs can overlay air traffic with a wide selection of maps such as geo-political boundaries, air traffic control center boundaries, high-altitude jet routes, satellite cloud and radar imagery.
Problems
Traffic
The day-to-day problems faced by the air traffic control system are primarily related to the volume of air traffic demand placed on the system and weather. Several factors dictate the amount of traffic that can land at an airport in a given amount of time. Each landing aircraft must touch down, slow, and exit the runway before the next crosses the approach end of the runway. This process requires at least one and up to four minutes for each aircraft. Allowing for departures between arrivals, each runway can thus handle about 30 arrivals per hour. A large airport with two arrival runways can handle about 60 arrivals per hour in good weather. Problems begin when airlines schedule more arrivals into an airport than can be physically handled, or when delays elsewhere cause groups of aircraft – that would otherwise be separated in time – to arrive simultaneously. Aircraft must then be delayed in the air by holding over specified locations until they may be safely sequenced to the runway. Up until the 1990s, holding, which has significant environmental and cost implications, was a routine occurrence at many airports. Advances in computers now allow the sequencing of planes hours in advance. Thus, planes may be delayed before they even take off (by being given a "slot"), or may reduce speed in flight and proceed more slowly thus significantly reducing the amount of holding.
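The arithmetic behind these capacity figures is simple enough to sketch. The following Python snippet is purely illustrative: the occupancy and departure-gap values are assumptions chosen to reproduce the rough figures quoted above, not operational parameters.

```python
# Illustrative only: values are assumptions matching the rough figures above.

def arrivals_per_hour(runway_occupancy_min: float, departure_gap_min: float) -> float:
    """Rough hourly arrival capacity of a single runway.

    Each arrival 'owns' the runway for its occupancy time plus the gap
    left so a departure can be released between successive arrivals.
    """
    minutes_per_arrival = runway_occupancy_min + departure_gap_min
    return 60.0 / minutes_per_arrival

print(round(arrivals_per_hour(1.0, 1.0)))  # ~30 arrivals per hour on one runway
# Two independent arrival runways roughly double this, ~60 per hour
```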
Air traffic control errors occur when the separation (either vertical or horizontal) between airborne aircraft falls below the minimum prescribed separation set (for the domestic United States) by the US Federal Aviation Administration. Separation minimums for terminal control areas (TCAs) around airports are lower than en-route standards. Errors generally occur during periods following times of intense activity, when controllers tend to relax and overlook the presence of traffic and conditions that lead to loss of minimum separation.
Weather
Beyond runway capacity issues, the weather is a major factor in traffic capacity. Rain, ice, snow or hail on the runway cause landing aircraft to take longer to slow and exit, thus reducing the safe arrival rate and requiring more space between landing aircraft. Fog also requires a decrease in the landing rate. These, in turn, increase airborne delay for holding aircraft. If more aircraft are scheduled than can be safely and efficiently held in the air, a ground delay program may be established, delaying aircraft on the ground before departure due to conditions at the arrival airport.
In Area Control Centers, a major weather problem is thunderstorms, which present a variety of hazards to aircraft. Aircraft will deviate around storms, reducing the capacity of the en-route system by requiring more space per aircraft or causing congestion as many aircraft try to move through a single hole in a line of thunderstorms. Occasionally weather considerations cause delays to aircraft prior to their departure as routes are closed by thunderstorms.
Much money has been spent on creating software to streamline this process. However, at some ACCs, air traffic controllers still record data for each flight on strips of paper and personally coordinate their paths. In newer sites, these flight progress strips have been replaced by electronic data presented on computer screens. As new equipment is brought in, more and more sites are upgrading away from paper flight strips.
Congestion
Constrained control capacity and growing traffic lead to flight cancellation and delays:
In America, delays caused by ATC grew by 69% between 2012 and 2017.
In China, the average delay per domestic flight spiked by 50% in 2017 to 15 minutes per flight.
In Europe, en route delays grew by 105% in 2018, due to a lack of capacity or staff (60%), weather (25%) or strikes (14%), costing the European economy €17.6bn ($20.8bn), up by 28% on 2017.
By then the market for air-traffic services was worth $14bn.
More efficient ATC could save 5-10% of aviation fuel by avoiding holding patterns and indirect airways.
The military takes 80% of Chinese air space, congesting the thin corridors open to airliners.
Britain closes military airspace only during air-force exercises.
Callsigns
A prerequisite to safe air traffic separation is the assignment and use of distinctive call signs. These are permanently allocated by ICAO on request, usually to scheduled flights and some air forces and other military services for military flights. Written callsigns use a 3-letter combination followed by the flight number, such as AAL872 or VLG1011. As such they appear on flight plans and ATC radar labels. There are also the audio or radiotelephony callsigns used on the radio contact between pilots and air traffic control. These are not always identical to their written counterparts. An example of an audio callsign would be "Speedbird 832", instead of the written "BAW832". This is used to reduce the chance of confusion between ATC and the aircraft. By default, the callsign for any other flight is the registration number (tail number) of the aircraft, such as "N12345", "C-GABC" or "EC-IZD". The short radiotelephony callsign for these tail numbers is the last 3 letters using the NATO phonetic alphabet (i.e. ABC spoken alpha-bravo-charlie for C-GABC) or the last 3 numbers (i.e. three-four-five for N12345). In the United States, the prefix may be an aircraft type, model or manufacturer in place of the first registration character, for example, "N11842" could become "Cessna 842". This abbreviation is only allowed after communications have been established in each sector.
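The abbreviation rule for tail numbers described above can be illustrated with a short sketch. This is not any operational system's code; the helper name and structure are ours, though the NATO phonetic alphabet itself is standard.

```python
# Minimal sketch of the short-callsign rule described above (illustrative).

NATO = {
    "A": "Alpha", "B": "Bravo", "C": "Charlie", "D": "Delta", "E": "Echo",
    "F": "Foxtrot", "G": "Golf", "H": "Hotel", "I": "India", "J": "Juliett",
    "K": "Kilo", "L": "Lima", "M": "Mike", "N": "November", "O": "Oscar",
    "P": "Papa", "Q": "Quebec", "R": "Romeo", "S": "Sierra", "T": "Tango",
    "U": "Uniform", "V": "Victor", "W": "Whiskey", "X": "X-ray",
    "Y": "Yankee", "Z": "Zulu",
}
DIGITS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
          "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def short_callsign(registration: str) -> str:
    """Spoken short form: the last three characters of the tail number."""
    tail = registration.replace("-", "").upper()[-3:]
    return " ".join(NATO.get(c, DIGITS.get(c, c)) for c in tail)

print(short_callsign("C-GABC"))  # Alpha Bravo Charlie
print(short_callsign("N12345"))  # three four five
```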
Before around 1980, the International Air Transport Association (IATA) and ICAO used the same 2-letter callsigns. Due to the larger number of new airlines after deregulation, ICAO established the 3-letter callsigns as mentioned above. The IATA callsigns are currently used in aerodromes on announcement boards but are no longer used in air traffic control. For example, AA is the IATA callsign for American Airlines – the ATC equivalent is AAL. Flight numbers on regular commercial flights are designated by the aircraft operator, and an identical callsign might be used for the same scheduled journey each day it is operated, even if the departure time varies a little across different days of the week. The callsign of the return flight often differs only by the final digit from the outbound flight. Generally, airline flight numbers are even if eastbound and odd if westbound. In order to reduce the possibility of two callsigns on one frequency sounding too similar at any time, a number of airlines, particularly in Europe, have started using alphanumeric callsigns that are not based on flight numbers (i.e. DLH23LG, spoken as Lufthansa-two-three-lima-golf, to prevent confusion between incoming DLH23 and outgoing DLH24 on the same frequency). Additionally, it is the right of the air traffic controller to change the 'audio' callsign for the period the flight is in their sector if there is a risk of confusion, usually choosing the tail number instead.
Technology
Much of ATC still relies on WWII technologies:
Radar localisation (though satellite navigation is cheaper and more accurate)
Two-way radio communication (instead of controller–pilot data link communications, as used at the MUAC)
In America, controllers hand each other paper flight progress strips.
Many technologies are used in air traffic control systems. Primary and secondary radar are used to enhance a controller's situation awareness within his assigned airspace – all types of aircraft send back primary echoes of varying sizes to controllers' screens as radar energy is bounced off their skins, and transponder-equipped aircraft reply to secondary radar interrogations by giving an ID (Mode A), an altitude (Mode C) and/or a unique callsign (Mode S). Certain types of weather may also register on the radar screen.
These inputs, added to data from other radars, are correlated to build the air situation. Some basic processing occurs on the radar tracks, such as calculating ground speed and magnetic headings.
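As a rough illustration of the kind of basic track processing mentioned above, the sketch below derives ground speed and track from two successive radar plots. It is a toy calculation on a flat local plane with hypothetical names; real trackers fuse many plots and model measurement noise.

```python
# Toy track processing: two plots on a flat local plane (illustrative only).
import math

def speed_and_track(x1_nm, y1_nm, t1_s, x2_nm, y2_nm, t2_s):
    """Ground speed (knots) and track (degrees from north) between two plots.

    Positions are east/north offsets in nautical miles; times are in seconds.
    """
    dx, dy = x2_nm - x1_nm, y2_nm - y1_nm
    dt_hours = (t2_s - t1_s) / 3600.0
    speed_kt = math.hypot(dx, dy) / dt_hours
    track_deg = math.degrees(math.atan2(dx, dy)) % 360.0  # 0 = north, 90 = east
    return speed_kt, track_deg

# Aircraft moves 0.8 NM east and 0.6 NM north between plots 12 seconds apart
print(speed_and_track(0.0, 0.0, 0, 0.8, 0.6, 12))  # ~300 kt on a ~053 degree track
```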
Usually, a flight data processing system manages all the flight plan related data, incorporating, to a greater or lesser degree, the track information once the correlation between flight plan and track is established. All this information is distributed to modern operational display systems, making it available to controllers.
The FAA has spent over US$3 billion on software, but a fully automated system is still over the horizon. In 2002 the UK brought a new area control centre into service at the London Area Control Centre, Swanwick, Hampshire, relieving a busy suburban centre at West Drayton, Middlesex, north of London Heathrow Airport. Software from Lockheed-Martin predominates at the London Area Control Centre. However, the centre was initially troubled by software and communications problems causing delays and occasional shutdowns.
Some tools are available in different domains to help the controller further:
Flight data processing systems: this is the system (usually one per center) that processes all the information related to the flight (the flight plan), typically in the time horizon from gate to gate (airport departure/arrival gates). It uses such processed information to invoke other flight plan related tools (such as MTCD), and distributes such processed information to all the stakeholders (air traffic controllers, collateral centers, airports, etc.).
Short-term conflict alert (STCA) that checks possible conflicting trajectories in a time horizon of about 2 or 3 minutes (or even less in an approach context – 35 seconds in the French Roissy & Orly approach centres) and alerts the controller prior to the loss of separation. In some systems the algorithms may also propose a resolution – how to turn, climb, descend, or change speed – in order to avoid infringing the minimum safety distance or altitude clearance; a minimal conflict-detection sketch follows this list.
Minimum safe altitude warning (MSAW): a tool that alerts the controller if an aircraft appears to be flying too low to the ground or will impact terrain based on its current altitude and heading.
System coordination (SYSCO) to enable controllers to negotiate the release of flights from one sector to another.
Area penetration warning (APW) to inform a controller that a flight will penetrate a restricted area.
Arrival and departure manager to help sequence the takeoff and landing of aircraft.
The departure manager (DMAN): A system aid for the ATC at airports, that calculates a planned departure flow with the goal to maintain an optimal throughput at the runway, reduce queuing at holding point and distribute the information to various stakeholders at the airport (i.e. the airline, ground handling and air traffic control (ATC)).
The arrival manager (AMAN): A system aid for the ATC at airports, that calculates a planned arrival flow with the goal to maintain an optimal throughput at the runway, reduce arrival queuing and distribute the information to various stakeholders.
Passive final approach spacing tool (pFAST), a CTAS tool, provides runway assignment and sequence number advisories to terminal controllers to improve the arrival rate at congested airports. pFAST was deployed and operational at five US TRACONs before being cancelled. NASA research included an active FAST capability that also provided vector and speed advisories to implement the runway and sequence advisories.
Converging runway display aid (CRDA) enables approach controllers to run two final approaches that intersect while making sure that go-arounds are minimized.
Center TRACON automation system (CTAS) is a suite of human centered decision support tools developed by NASA Ames Research Center. Several of the CTAS tools have been field tested and transitioned to the FAA for operational evaluation and use. Some of the CTAS tools are: traffic management advisor (TMA), passive final approach spacing tool (pFAST), collaborative arrival planning (CAP), direct-to (D2), en route descent advisor (EDA) and multi-center TMA. The software is running on Linux.
Traffic management advisor (TMA), a CTAS tool, is an en route decision support tool that automates time based metering solutions to provide an upper limit of aircraft to a TRACON from the center over a set period of time. Schedules are determined that will not exceed the specified arrival rate and controllers use the scheduled times to provide the appropriate delay to arrivals while in the en route domain. This results in an overall reduction in en route delays and also moves the delays to more efficient airspace (higher altitudes) than occur if holding near the TRACON boundary, which is required in order to prevent overloading the TRACON controllers. TMA is operational at most en route air route traffic control centers (ARTCCs) and continues to be enhanced to address more complex traffic situations (e.g. adjacent center metering (ACM) and en route departure capability (EDC))
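The short-term conflict alert described earlier in this list can be illustrated by a deliberately simplified check: project both tracks forward in a straight line and flag any predicted loss of lateral separation within a short look-ahead window. The thresholds, the straight-line kinematics, and the omission of the vertical dimension are all simplifying assumptions, not operational values.

```python
# Simplified STCA-style check (illustrative; vertical separation omitted).
import math

def predicts_conflict(p1, v1, p2, v2, lookahead_s=120,
                      lateral_min_nm=5.0, step_s=5):
    """p = (x, y) position in NM; v = (vx, vy) velocity in NM per second."""
    for t in range(0, lookahead_s + 1, step_s):
        dx = (p1[0] + v1[0] * t) - (p2[0] + v2[0] * t)
        dy = (p1[1] + v1[1] * t) - (p2[1] + v2[1] * t)
        if math.hypot(dx, dy) < lateral_min_nm:
            return True, t  # predicted loss of separation in t seconds
    return False, None

# Head-on traffic 20 NM apart, each at about 420 kt (~0.117 NM/s)
print(predicts_conflict((0, 0), (0.117, 0), (20, 0), (-0.117, 0)))  # (True, 65)
```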
MTCD & URET
In the US, user request evaluation tool (URET) takes paper strips out of the equation for en route controllers at ARTCCs by providing a display that shows all aircraft that are either in or currently routed into the sector.
In Europe, several MTCD tools are available: iFACTS (NATS), VAFORIT (DFS), new FDPS (MUAC). The SESAR programme should soon launch new MTCD concepts.
URET and MTCD provide conflict advisories up to 30 minutes in advance and have a suite of assistance tools that assist in evaluating resolution options and pilot requests.
Mode S: provides a data downlink of flight parameters via secondary surveillance radars allowing radar processing systems and therefore controllers to see various data on a flight, including airframe unique id (24-bits encoded), indicated airspeed and flight director selected level, amongst others.
CPDLC: controller-pilot data link communications – allows digital messages to be sent between controllers and pilots, avoiding the need to use radiotelephony. It is especially useful in areas where difficult-to-use HF radiotelephony was previously used for communication with aircraft, e.g. oceans. This is currently in use in various parts of the world including the Atlantic and Pacific oceans.
ADS-B: automatic dependent surveillance broadcast – provides a data downlink of various flight parameters to air traffic control systems via the transponder (1090 MHz) and reception of those data by other aircraft in the vicinity. The most important is the aircraft's latitude, longitude and level: such data can be utilized to create a radar-like display of aircraft for controllers and thus allows a form of pseudo-radar control to be done in areas where the installation of radar is either prohibitive on the grounds of low traffic levels, or technically not feasible (e.g. oceans). This is currently in use in Australia, Canada and parts of the Pacific Ocean and Alaska.
The electronic flight strip system (e-strip):
A system of electronic flight strips replacing the old paper strips is being used by several service providers, such as Nav Canada, MASUAC, DFS, DECEA. E-strips allow controllers to manage electronic flight data online without paper strips, reducing the need for manual functions, creating new tools and reducing the ATCO's workload. The first electronic flight strip systems were independently and simultaneously invented and implemented by Nav Canada and Saipher ATC in 1999. The Nav Canada system is known as EXCDS and was rebranded in 2011 as NAVCANstrips; Saipher's first generation system, known as SGTC, is now being updated by its 2nd generation system, the TATIC TWR. DECEA in Brazil is the world's largest user of tower e-strip systems, ranging from very small airports up to the busiest ones, taking advantage of real-time information and data collection from each of more than 150 sites for use in air traffic flow management (ATFM), billing and statistics.
Screen content recording: a hardware- or software-based recording function which is part of most modern automation systems and captures the screen content shown to the ATCO. Such recordings are used for later replay, together with audio recordings, for investigations and post-event analysis.
Communication navigation surveillance / air traffic management (CNS/ATM) systems are communications, navigation, and surveillance systems, employing digital technologies, including satellite systems together with various levels of automation, applied in support of a seamless global air traffic management system.
Air navigation service providers (ANSPs) and air traffic service providers (ATSPs)
Azerbaijan – AzərAeroNaviqasiya
Albania – Albcontrol
Algeria – Etablissement National de la Navigation Aérienne (ENNA)
Argentina - Empresa Argentina de Navegación Aérea (EANA)
Armenia – Armenian Air Traffic Services (ARMATS)
Australia – Airservices Australia (Government owned Corporation) and Royal Australian Air Force
Austria – Austro Control
Bangladesh- Civil Aviation Authority, Bangladesh
Belarus – Republican Unitary Enterprise "Белаэронавигация (Belarusian Air Navigation)"
Belgium – Skeyes - Authority of Airways
Bosnia and Herzegovina – Agencija za pružanje usluga u zračnoj plovidbi (Bosnia and Herzegovina Air Navigation Services Agency)
Brazil – Departamento de Controle do Espaço Aéreo (ATC/ATM Authority) and ANAC – Agência Nacional de Aviação Civil (Civil Aviation Authority)
Bulgaria – Air Traffic Services Authority
Cambodia – Cambodia Air Traffic Services (CATS)
Canada – Nav Canada – formerly provided by Transport Canada and Canadian Forces
Cayman Islands – CIAA Cayman Islands Airports Authority
Central America – Corporación Centroamericana de Servicios de Navegación Aérea
Guatemala – Dirección General de Aeronáutica Civil (DGAC)
El Salvador
Honduras
Nicaragua – Empresa Administradora Aeropuertos Internacionales (EAAI)
Costa Rica – Dirección General de Aviación Civil
Belize
Chile – Dirección General de Aeronáutica Civil (DGAC)
Colombia – Aeronáutica Civil Colombiana (UAEAC)
Croatia – Hrvatska kontrola zračne plovidbe (Croatia Control Ltd.)
Cuba – Instituto de Aeronáutica Civil de Cuba (IACC)
Czech Republic – Řízení letového provozu ČR
Cyprus - Department of Civil Aviation
Denmark – Naviair (Danish ATC)
Dominican Republic – Instituto Dominicano de Aviación Civil (IDAC) "Dominican Institute of Civil Aviation"
Eastern Caribbean – Eastern Caribbean Civil Aviation Authority (ECCAA)
Anguilla
Antigua and Barbuda
British Virgin Islands
Dominica
Grenada
Saint Kitts and Nevis
Saint Lucia
Saint Vincent and the Grenadines
Ecuador – Dirección General de Aviación Civil (DGAC) "General Direction of Civil Aviation" Government Body
Estonia – Estonian Air Navigation Services
Europe – Eurocontrol (European Organisation for the Safety of Air Navigation)
Fiji - Fiji Airports (fully owned Government Commercial Company)
Finland – Finavia
France – Direction Générale de l'Aviation Civile (DGAC) : Direction des Services de la Navigation Aérienne (DSNA) (Government body)
Georgia – SAKAERONAVIGATSIA, Ltd. (Georgian Air Navigation)
Germany – Deutsche Flugsicherung (German ATC – State-owned company)
Greece – Hellenic Civil Aviation Authority (HCAA)
Hong Kong – Civil Aviation Department (CAD)
Hungary – HungaroControl Magyar Légiforgalmi Szolgálat Zrt. (HungaroControl Hungarian Air Navigation Services Pte. Ltd. Co.)
Iceland – ISAVIA
Indonesia – AirNav Indonesia
Iran - Iran Civil Aviation Organization (ICAO)
Ireland – Irish Aviation Authority (IAA)
India – Airports Authority of India (AAI) (under Ministry of Civil Aviation, Government of India and Indian Air Force)
Iraq – Iraqi Air Navigation – ICAA
Israel – Israeli Airports Authority (IIA)
Italy – ENAV SpA and Italian Air Force
Jamaica – JCAA (Jamaica Civil Aviation Authority)
Japan – JCAB (Japan Civil Aviation Bureau)
Kenya – KCAA (Kenya Civil Aviation Authority)
Latvia – LGS (Latvian ATC)
Lithuania – ANS (Lithuanian ATC)
Luxembourg – Administration de la navigation aérienne (ANA – government administration)
Macedonia – DGCA (Macedonian ATC)
Malaysia – Civil Aviation Authority of Malaysia (CAAM)
Malta – Malta Air Traffic Services Ltd
Mexico – Servicios a la Navegación en el Espacio Aéreo Mexicano
Morocco - Office National Des Aeroports (ONDA)
Nepal – Civil Aviation Authority of Nepal
Netherlands – Luchtverkeersleiding Nederland (LVNL) (Dutch ATC) Eurocontrol (European area control ATC)
New Zealand – Airways New Zealand (State owned enterprise)
Nigeria - Nigeria Civil Aviation Authority (NCAA)
Norway – Avinor (State-owned private company)
Oman – Directorate General of Meteorology & Air Navigation (Government of Oman)
Pakistan – Civil Aviation Authority (under Government of Pakistan)
Peru – Centro de Instrucción de Aviación Civil CIAC Civil Aviation Training Center
Philippines – Civil Aviation Authority of the Philippines (CAAP) (under the Philippine Government)
Poland – Polish Air Navigation Services Agency (PANSA)
Portugal – NAV (Portuguese ATC)
Puerto Rico – Administracion Federal de Aviacion
Romania – Romanian Air Traffic Services Administration (ROMATSA)
Russia – Federal State Unitary Enterprise "State ATM Corporation"
Saudi Arabia – Saudi Air Navigation Services (SANS)
Seychelles – Seychelles Civil Aviation Authority (SCAA)
Singapore – Civil Aviation Authority of Singapore (CAAS)
Serbia – Serbia and Montenegro Air Traffic Services Agency Ltd. (SMATSA)
Slovakia – Letové prevádzkové služby Slovenskej republiky
Slovenia – Slovenia Control
South Africa – Air Traffic and Navigation Services (ATNS)
South Korea – Korea Office of Civil Aviation
Spain – AENA now AENA S.A. (Spanish Airports) and ENAIRE (ATC & ATSP)
Sri Lanka – Airport & Aviation Services (Sri Lanka) Limited (Government owned company)
Sweden – LFV (government body)
Switzerland – Skyguide
Taiwan – ANWS (Civil Aeronautical Administration)
Thailand – AEROTHAI (Aeronautical Radio of Thailand)
Trinidad and Tobago – Trinidad and Tobago Civil Aviation Authority (TTCAA)
Turkey – General Directorate of State Airports Authority (DHMI)
United Arab Emirates – General Civil Aviation Authority (GCAA)
United Kingdom – National Air Traffic Services (NATS) (49% State owned public-private partnership)
United States – Federal Aviation Administration (FAA) (government body)
Ukraine – Ukrainian State Air Traffic Service Enterprise (UkSATSE)
Venezuela – Instituto Nacional de Aeronautica Civil (INAC)
Zambia - Zambia Civil Aviation Authority (ZCAA)
Zimbabwe - Zimbabwe Civil Aviation Authority
Proposed changes
In the United States, some alterations to traffic control procedures are being examined:
The Next Generation Air Transportation System examines how to overhaul the United States national airspace system.
Free flight is a developing air traffic control method that uses no centralized control (e.g. air traffic controllers). Instead, parts of airspace are reserved dynamically and automatically in a distributed way using computer communication to ensure the required separation between aircraft.
In Europe, the SESAR (Single European Sky ATM Research) programme plans to develop new methods, technologies, procedures, and systems to accommodate future (2020 and beyond) air traffic needs.
In October 2018, European controller unions dismissed setting targets to improve ATC as "a waste of time and effort" as new technology could cut costs for users but threaten their jobs.
In April 2019, the EU called for a "Digital European Sky", focusing on cutting costs by including a common digitisation standard and allowing controllers to move to where they are needed instead of merging national ATCs, as it would not solve all problems.
Single air-traffic control services in continent-sized America and China do not alleviate congestion.
Eurocontrol tries to reduce delays by diverting flights to less busy routes: flight paths across Europe were redesigned to accommodate the new airport in Istanbul, which opened in April, but the extra capacity will be absorbed by rising demand for air travel.
Well-paid jobs in Western Europe could move east with cheaper labour.
The average Spanish controller earns over €200,000 a year, over seven times the country's average salary and more than pilots, and at least ten controllers were paid over €810,000 ($1.1m) a year in 2010.
French controllers spent a cumulative nine months on strike between 2004 and 2016.
Privatization
Many countries have also privatized or corporatized their air navigation service providers. There are several models that can be used for ATC service providers. The first is to have the ATC services be part of a government agency as is currently the case in the United States. The problem with this model is that funding can be inconsistent and can disrupt the development and operation of services. Sometimes funding can disappear when lawmakers cannot approve budgets in time. Both proponents and opponents of privatization recognize that stable funding is one of the major factors for successful upgrades of ATC infrastructure. Some of the funding issues include sequestration and politicization of projects. Proponents argue that moving ATC services to a private corporation could stabilize funding over the long term which will result in more predictable planning and rollout of new technology as well as training of personnel.
Another model is to have ATC services provided by a government corporation. This model is used in Germany, where funding is obtained through user fees. Yet another model is to have a for-profit corporation operate ATC services. This is the model used in the United Kingdom, but there have been several issues with the system there including a large-scale failure in December 2014 which caused delays and cancellations and has been attributed to cost-cutting measures put in place by this corporation. In fact, earlier that year, the corporation owned by the German government won the bid to provide ATC services for Gatwick Airport in the United Kingdom. The last model, which is often the suggested model for the United States to transition to is to have a non-profit organization that would handle ATC services as is used in Canada.
The Canadian system is the one most often used as a model by proponents of privatization. Air traffic control privatization has been successful in Canada with the creation of Nav Canada, a private nonprofit organization which has reduced costs and has allowed new technologies to be deployed faster due to the elimination of much of the bureaucratic red tape. This has resulted in shorter flights and less fuel usage. It has also resulted in flights being safer due to new technology. Nav Canada is funded from fees that are collected from the airlines based on the weight of the aircraft and the distance flown.
ATC is still run by national governments with few exceptions: in the European Union, only Britain and Italy have private shareholders.
Nav Canada is an independent company that is allowed to borrow and can invest to boost productivity; in 2017 its costs were a third lower than in America, where the FAA is exposed to budget cuts and cannot borrow.
Privatisation does not guarantee lower prices: the profit margin of MUAC was 70% in 2017, as there is no competition, but governments could offer fixed-term concessions.
Australia, Fiji and New Zealand run the upper airspace for the Pacific islands' governments, as Hungary has done for Kosovo since 2014.
HungaroControl offers remote airport tower services from Budapest.
In America, a proposal to split ATC from the FAA into a separate entity was supported by airlines, airports and controller unions, but was opposed by business aviation, as its currently free ATC service would become paid.
ATC regulations in the United States
FAA control tower operators (CTO) / air traffic controllers use FAA Order 7110.65 as the authority for all procedures regarding air traffic. For more information regarding air traffic control rules and regulations, refer to the FAA's website.
See also
Air traffic service
Flight information service officer
Flight planning
ICAO recommendations on use of the International System of Units
Forward air control
Global air-traffic management
Tower en route control (TEC)
References
External links
U.S. Centennial of Flight Commission – Air Traffic Control
NASA video of US air traffic
air traffic
Radar |
3228627 | https://en.wikipedia.org/wiki/Talking%20Moose | Talking Moose | The Talking Moose is an animated talking utility for the Apple Macintosh. It was created in 1986 by Canadian programmer Steven Halls. It was the first animated talking agent on a personal computer, featuring a moose that would appear at periodic intervals with a joke or witticism. The moose would also comment on system events and user actions and could speak what a user typed using the Moose Proof desk accessory.
Design
According to Halls, the original purpose of the moose was to make use of the Mac's Macintalk text-to-speech engine in a novel manner. A Doonesbury strip in which the characters were commented on by a talking computer provided inspiration, and Halls found that a moose head with antlers was recognizable even on low-resolution computer screens.
The moose was the first facially animated talking agent with lip synchronization and it became the seed idea for future talking agents, such as Clippy the paperclip in Microsoft Windows, Bonzi Buddy, and Prody Parrot from Creative SoundBlaster.
The Talking Moose used Apple's Macintalk software, the first version of which famously made the original "Never trust a computer you can't lift" speech at the Macintosh launch in 1984. Apple's development of Macintalk had petered out and they granted Halls permission to use, and continue refining, the software for free. Halls did not just improve the fluidity of the speech and the reliability of the interpretation but gave the moose a library of comedic observations and wisecracks which gave it a distinctive character.
Around 1990, a version of the Talking Moose software was commercially published by Baseline Publishing. This commercial release of the Talking Moose included color graphics and additional software that allowed users to create and edit phrases to be spoken. A stripped-down version of the Baseline release of the Talking Moose was distributed with the Bob LeVitus book Stupid Mac Tricks in 1989.
In the 1990s, the Moose was rewritten by Uli Kusterer under the name Uli's Moose - for which he later obtained Steve Halls' blessing. This Moose was included in Bob LeVitus' iMac (and iBook) book "I Didn't Know You Could Do That".
Moose versions
Version 1.0 of the Talking Moose was released in 1986 by Steve Halls.
Version 2.0 was released in 1987, and ran on Macintosh systems 6.0.4 - 7.1. The Macintalk voice used for the Moose was 'Fred'.
Around 1990, Baseline Publishing commercially published the talking moose, and released version 4, introducing new characters from a "Cartoon Carnival" supposedly run by the titular ungulate.
Uli Kusterer - the next author of the moose - got rid of the cartoon carnival, and worked more in the spirit of the original moose, releasing new versions starting at 1.0, which supported Mac OS 7.1 - 9.2. These were released initially on CompuServe, and later on the internet. He also developed the first OS X native version (v 3.0). The latest Macintosh version of the Moose (v3.5.7) works with all versions of OS X, 10.3 through 10.7, and includes Universal Binaries.
From January 8, 2009, The Talking Moose has been posting periodic comments to a Twitter account. The account was banned, but has since been reinstated.
Halls then recreated the Talking Moose for Microsoft Windows. The new Moose is positive, focusing on assistance and self-help topics like weight loss, smoking cessation, and anxiety/stress reduction.
References
External links
The old Talking Moose web page by Steve Halls
The new Talking Moose web page by Steve Halls
Uli's Talking Moose for Macintosh
1986 software
Apple Inc. software
Fictional deer and moose
MacOS
Proprietary software
Novelty software |
47931716 | https://en.wikipedia.org/wiki/Laplink | Laplink | Laplink (stylized as LapLink) was a proprietary piece of software developed by Mark Eppley and sold by Traveling Software, which is now LapLink Software, Inc. First available in 1983, LapLink was used to synchronize, copy, or move files between two PCs in an era before local area networks, using the parallel port with a LapLink cable, the serial port with a null modem cable, or USB with a USB ad hoc network cable.
LapLink was the predecessor to Laplink PCmover.
LapLink typically shipped with a LapLink cable to link two PCs together, enabling the transfer of files from one PC to the other using the LapLink software.
References
Backup software
File transfer software |
45124802 | https://en.wikipedia.org/wiki/Software%20intelligence | Software intelligence | Software Intelligence is insight into the structural condition of software assets, produced by software designed to analyze database structure, software frameworks and source code in order to better understand and control complex software systems in Information Technology environments. Similarly to Business Intelligence (BI), Software Intelligence is produced by a set of software tools and techniques for the mining of data and of software inner structure. The end results are information used by business and software stakeholders to make informed decisions, measure the efficiency of software development organizations, communicate about software health, and prevent software catastrophes.
History
The term Software Intelligence was used by Kirk Paul Lafler, an American engineer, entrepreneur, and consultant who founded Software Intelligence Corporation in 1979. At that time, it was mainly related to SAS activities, in which he has been an expert since 1979.
In the early 1980s, Victor R. Basili co-authored several papers detailing a methodology for collecting valid software engineering data, evaluating software development, and analyzing variations.
In 2004, different software vendors in software analysis started using the term as part of their product naming and marketing strategy. Then in 2010, Ahmed E. Hassan and Tao Xie defined Software Intelligence as a "practice offering software practitioners up-to-date and pertinent information to support their daily decision-making processes and Software Intelligence should support decision-making processes throughout the lifetime of a software system". They went on to describe Software Intelligence as having a "strong impact on modern software practice" for the upcoming decades.
Capabilities
Because of the complexity and wide range of components and subjects implied in software, Software intelligence is derived from different aspects of software:
Software composition is the construction of software application components. Components result from software coding, as well as the integration of the source code from external components: Open source, 3rd party components, or frameworks. Other components can be integrated using application programming interface call to libraries or services.
Software architecture refers to the structure and organization of elements of a system, relations, and properties among them.
Software flaws designate problems that can compromise security, stability, and resiliency, or cause unexpected results. There is no standard definition of software flaws, but the most widely accepted comes from The MITRE Corporation, which catalogs common flaws in the Common Weakness Enumeration.
Software grades assess attributes of the software. Historically, the classification and terminology of attributes have been derived from the ISO 9126-3 and the subsequent ISO 25000:2005 quality model.
Software economics refers to the resource evaluation of software in past, present, or future to make decisions and to govern.
Components
The capabilities of Software intelligence platforms include an increasing number of components:
Code analyzer to serve as an information basis for other Software Intelligence components, identifying objects created by the programming language as well as external objects from open source, third-party objects, frameworks, APIs, or services
Graphical visualization and blueprinting of the inner structure of the software product or application considered including dependencies, from data acquisition (automated and real-time data capture, end-user entries) up to data storage, the different layers within the software, and the coupling between all elements.
Navigation capabilities within components and impact analysis features
List of flaws, architectural and coding violations, against standardized best practices, cloud blocker preventing migration to a Cloud environment, and rogue data-call entailing the security and integrity of software
Grades or scores of the structural and software quality aligned with industry-standard like OMG, CISQ or SEI assessing the reliability, security, efficiency, maintainability, and scalability to cloud or other systems.
Metrics quantifying and estimating software economics, including work effort, sizing, and technical debt (a minimal illustrative sketch follows this list)
Industry references and benchmarking allowing comparisons between outputs of analysis and industry standards
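As a concrete illustration of the metrics item above: there is no single standard formula for technical debt, but such platforms typically convert detected violations into an estimated remediation effort. The severities, effort figures, and rule names below are invented for the example and are not drawn from any particular product.

```python
# Illustrative technical-debt estimate; all figures and rule names are made up.

EFFORT_HOURS = {"low": 0.5, "medium": 2.0, "high": 8.0}  # assumed effort per severity

def technical_debt_hours(violations):
    """violations: iterable of (rule_id, severity) pairs from a code analyzer."""
    return sum(EFFORT_HOURS.get(severity, 0.0) for _, severity in violations)

findings = [("unvalidated-input", "high"),
            ("empty-catch-block", "medium"),
            ("magic-number", "low")]
print(technical_debt_hours(findings))  # 10.5 estimated remediation hours
```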
User Aspect
Some considerations must be made in order to successfully integrate the usage of Software Intelligence systems in a company. Ultimately the Software Intelligence system must be accepted and utilized by the users in order for it to add value to the organization. If the system does not add value to the users' mission, they simply will not use it, as stated by M. Storey in 2003.
At the code level and system representation, Software Intelligence systems must provide a different level of abstractions: an abstract view for designing, explaining and documenting and a detailed view for understanding and analyzing the software system.
At the governance level, the user acceptance for Software Intelligence covers different areas related to the inner functioning of the system as well as the output of the system. It encompasses these requirements:
Comprehensive: missing information may lead to a wrong or inappropriate decision, and completeness is also a factor influencing user acceptance of a system.
Accurate: accuracy depends on how the data is collected to ensure fair and indisputable opinion and judgment.
Precise: precision is usually judged by comparing several measurements from the same or different sources.
Scalable: lack of scalability in the software industry is a critical factor leading to failure.
Credible: outputs must be trusted and believed.
Deployable and usable.
Applications
Software intelligence has many applications in all businesses relating to the software environment, whether it is software for professionals, individuals, or embedded software.
Depending on the association and the usage of the components, applications will relate to:
Change and modernization: uniform documentation and blueprinting on all inner components, external code integrated, or call to internal or external components of the software
Resiliency and security: measuring against industry standards to diagnose structural flaws in an IT environment. Compliance validation regarding security, specific regulations or technical matters.
Decision making and governance: providing analytics about the software itself or the stakeholders involved in its development, e.g. productivity measurement to inform business and IT leaders about progress towards business goals; assessment and benchmarking to help business and IT leaders make informed, fact-based decisions about software.
Marketplace
Software Intelligence is a high-level discipline that has been gradually growing to cover the applications listed above. Several markets drive the need for it:
Application Portfolio Analysis (APA) aiming at improving the enterprise performance
Software assessment for producing software KPIs and improving quality and productivity
Software security and resiliency measures and validation
Software evolution or legacy modernization, for which blueprints of the software systems are needed, as well as tools for improving and facilitating modifications
References
Software
Data management
Source code |
56281614 | https://en.wikipedia.org/wiki/2017%20Macron%20e-mail%20leaks | 2017 Macron e-mail leaks | The 2017 Macron e-mail leaks were leaks of more than 20,000 e-mails related to the campaign of Emmanuel Macron during the 2017 French presidential elections, two days before the final vote. The leaks garnered an abundance of media attention due to how quickly news of the leak spread throughout the Internet, aided in large part by bots and spammers and drew accusations that the government of Russia under Vladimir Putin was responsible. The e-mails were shared by WikiLeaks and several American alt-right activists through social media sites like Twitter, Facebook, and 4chan.
Originally posted on a filesharing site called Pastebin, the e-mails had little to no effect on the final vote as they were dumped just hours before a 44-hour media blackout that is legally required by French electoral law.
The campaign said the e-mails had been "fraudulently obtained" and that false documents were mingled with genuine ones in order "to create confusion and misinformation." Numerama, an online publication focusing on digital life, described the leaked material as "utterly mundane", consisting of "the contents of a hard drive and several emails of co-workers and En Marche political officials." Mark Warner, United States Senator from Virginia, cited the e-mail leak as reinforcing the case for the U.S. Senate Intelligence Committee's investigation into Russian interference in the 2016 United States elections. Nonetheless, the Russian government denied all allegations of foreign electoral intervention.
Background
After the first round of the 2017 French presidential election produced no majority winner, the top two candidates proceeded to a runoff election to be held on 7 May of that year. Emmanuel Macron of the En Marche! party and Marine Le Pen of the National Front both began campaigning across France on their competing points of view. The election was characterized by widespread dissatisfaction with the administration of President François Hollande and the French governmental establishment as a whole.
The election was widely regarded as a referendum between the internationalist centrism of Macron and the populist far-right ideology of Le Pen. After a slew of events considered detrimental to globalization and a triumph of nationalism and isolationism, such as the Brexit referendum and the election of Donald Trump, many international observers viewed the French election as another possible trendsetting event for Western politics. Le Pen's anti-immigration, anti-NATO, and anti-European Union stances attracted widespread support from far-right politicians and activists as far away as the United States, including Donald Trump, and raised questions about possible appeasement of Russia; her campaign had even secured millions of euros from a Russian lender in 2014.
In the United States, Le Pen was praised by President Donald Trump on several occasions, and she saw widespread support and praise from large numbers of online conservative trolls and Internet alt-right activists, who simultaneously attacked Macron, on social media platforms like Twitter, Facebook, Reddit, and 4chan.
These trolls used spamming of Internet memes and misinformation as tactics to assail Macron, accusing him of being a "globalist puppet" and a supporter of Islamic immigration. This was not a new strategy; it had been executed to great effect during the 2016 United States presidential election. Legions of pro-Trump Internet users and bots had spammed social media and rapidly spread anti-Clinton news releases and leaks across the web, as was the case with the Democratic National Committee leaks and the John Podesta e-mail leaks, allegedly with aid from the Kremlin. Prior to the election, American national security officials warned the French government of the high probability of Russian digital meddling in the election, according to the Director of the National Security Agency, Mike Rogers.
E-mail leaks
On Friday 5 May 2017, two days before the scheduled vote in the presidential election, the campaign of Emmanuel Macron claimed that it had been the target of a "massive hack". At the same time, at least 9 gigabytes of data were dumped on Pastebin, an anonymous file sharing site, under a profile called 'EMLEAKS'. The drop was made just hours before an election media blackout was due to take place in advance of Sunday's vote, as legally mandated under French electoral law, which prevented Macron from issuing an effective response but also limited media coverage of the hack and subsequent leak. The e-mails, totaling 21,075, along with other data, were quickly posted to the anonymous message board 4chan, where they were shared on Twitter by alt-right activists, notably Jack Posobiec, who had them translated by the Québécois wing of the right-wing outlet The Rebel Media. It has been remarked that at that time, Rebel Media's Québécois wing consisted solely of radio personality Éric Duhaime.
The e-mail leak spread swiftly under the hashtag #MacronLeaks on Twitter and Facebook. Within three and a half hours of first being used, #MacronLeaks had reached 47,000 tweets. On Jack Posobiec's Twitter, the hashtag was retweeted 87 times within five minutes, likely pointing to the use of bots. WikiLeaks mentioned the leaks in subsequent tweets 15 times, contributing the most to the spread of the news. Within a short period of time, #MacronLeaks was trending in France and was on a banner on the Drudge Report homepage. In another sign of bot use, the ten most active accounts using the #MacronLeaks hashtag posted over 1,300 tweets in just over three hours. One particular account posted 294 tweets in a span of two hours. Analysis shows that the hashtag was mentioned more times by American accounts than French ones, but posts concerning them were, by a slim margin, written more often in French than English.
The leaked e-mails were claimed to show evidence of criminal wrongdoing by Macron and his campaign, including tax evasion and election fraud. A less suggestive examination of the e-mails by Numerama, a French online publication focusing on technological news, described the leaked emails as "utterly mundane", consisting of "the contents of a hard drive and several emails of co-workers and En Marche political officials." Leaked documents included "memos, bills, loans for amounts that are hardly over-the-top, recommendations and other reservations, amidst, of course, exchanges that are strictly personal and private — personal notes on the rain and sunshine, a confirmation email for the publishing of a book, reservation of a table for friends, etc."
Reaction
In response to the attack, Emmanuel Macron said it was "democratic destabilisation, like that seen during the last presidential campaign in the United States" and said the hackers had mixed falsified documents with genuine ones, "in order to sow doubt and disinformation." Florian Philippot, Vice President of the National Front and a Le Pen adviser, said in a tweet, "Will #MacronLeaks teach us something that investigative journalism has deliberately killed?" The French election commission warned media in the country that publishing the e-mails or discussing them so close to the election would be a violation of the law and issued a statement that in part read, "On the eve of the most important election for our institutions, the commission calls on everyone present on internet sites and social networks, primarily the media, but also all citizens, to show responsibility and not to pass on this content, so as not to distort the sincerity of the ballot." The leak did not appear to have any impact on the French presidential election, which continued as scheduled and ended with a Macron victory by a margin of 32%. Despite this, French security officials commenced an investigation into the hacking shortly after the election.
Shortly after the alt-right media boosted the leak, the chief of Macron's campaign, Mounir Mahjoubi, stated that they had been watching GRU hacking attempts since February and had let the attackers steal a carefully prepared cache of trivial and forged documents. After this was confirmed against the leak's contents, the leak's credibility was seriously undermined.
In the United States, U.S. Senator from Virginia and ranking member of the Senate Intelligence Committee, Mark Warner said the hacking and subsequent leak only emboldened his committee's investigation, and former Secretary of State and Democratic presidential candidate Hillary Clinton said in a tweet, "Victory for Macron, for France, the EU, & the world. Defeat to those interfering w/democracy. (But the media says I can't talk about that)."
Perpetrators
An assessment by Flashpoint, an American cybersecurity firm, stated that they determined with "moderate confidence" that the group behind the hacking and leak was APT28, better known as 'Fancy Bear', a hacking group with ties to Russian military intelligence. Metadata pulled from the dump revealed the name 'Georgy Petrovich Roshka', likely an alias, which has ties to a Moscow-based intelligence contractor. Many similarities, including the use of social media bots in an attempt to scrub metadata, also pointed to Fancy Bear. However, on 1 June 2017, Guillaume Poupard, the head of France's premier cybersecurity agency, said in an interview with the Associated Press that the hack "was so generic and simple that it could have been practically anyone". On 9 May, two days after the election, Mike Rogers, head of the NSA, said in sworn testimony before the United States Senate that he had been made aware of Russian attempts to hack French election infrastructure, though he did not mention anything related to the identities of those behind the Macron email hacking. This followed a French announcement that electronic voting for France's overseas citizens would be discontinued in light of cybersecurity threats.
According to the newspaper Le Monde, whose reporting was based on non-public reports by Google and FireEye, the GRU is responsible.
Vladimir Putin has denied claims of election interference, claiming Russia itself has also been a target of meddling.
References
2017 in France
Email leaks
2017 French presidential election
May 2017 events in France
Foreign electoral intervention
Information published by WikiLeaks
Email hacking
News leaks
Hacking in the 2010s
Alt-right |
9870224 | https://en.wikipedia.org/wiki/Vision%20%28comics%29 | Vision (comics) | Vision is the name of three fictional characters from Marvel Comics. The original character originated in Marvel's predecessor Timely Comics and is depicted as an extra-dimensional law enforcement officer; the latter two are humanoid androids. The original first appeared in Marvel Mystery Comics #13 in 1940.
Vision (Aarkus)
The original Vision first appeared in Marvel Mystery Comics #13, published by Timely Comics and created by Joe Simon and Jack Kirby.
Vision (Victor Shade)
A character loosely based on the original, the Vision was created in 1968 by Roy Thomas, Stan Lee and John Buscema. He is the best-known version, having made several appearances in the Marvel Cinematic Universe films and TV shows beginning in the 2010s.
Vision (Jonas)
The Vision (Jonas) is a fictional superhero appearing in American comic books published by Marvel Comics. The character first appears in Young Avengers #5, and is the third character and second android character published by Marvel with the superhero name Vision. He is a combination of the original android Vision's program files and the armor and brain patterns of Iron Lad.
Publication history
Vision first appeared in Young Avengers #5 (August 2005) and was created by Allan Heinberg and Jim Cheung.
The exact details of the character's personality and mental make-up vary from writer to writer. While some writers, such as Heinberg and Dan Slott, write him as an entirely new character, other writers like Brian Michael Bendis (during the "Collective" storyline) and Ed Brubaker (in Captain America: Reborn) write him as if he is the original Vision in a new body (or at least has access to the original Vision's memories).
Fictional character biography
The Vision is a fusion of the old Vision's operating systems and the armor of adventurer Iron Lad, a teenage version of Kang the Conqueror who arrives in the present. Through this merger, Iron Lad is able to access plans the Vision had created in the event of the Avengers' defeat. He uses these plans to assemble a new team of "Young Avengers". When Iron Lad is forced to remove his armor to stop Kang the Conqueror from tracking him, the Vision's operating system causes the armor to become a sentient being. When Iron Lad leaves the time period, he leaves the armor behind with the Vision's operating system activated.
The new Vision opts to stay with the Young Avengers and serve as a mentor for them, though it is later revealed that (due to having Iron Lad's brainwave patterns as the basis for his personality) he is with the group due to his growing feelings of affection towards Cassie Lang, the superhero known as Stature. After the events of the "Civil War" storyline, the Vision travels the world posing as different people in order to gain a better understanding of who he is. He then finds Cassie and declares his love, and states he has adopted the name "Jonas". During a later battle between the alien Skrulls and the Avengers, the Vision is shot through the head. He survives and joins with Nick Fury and S.H.I.E.L.D. alongside the other Young Avengers.
He joins the new lineup of the Mighty Avengers, along with Stature. They opt to keep their dual memberships in the Avengers and the Young Avengers a secret, in order to hunt for the Scarlet Witch (really Loki in disguise), who arranged for the roster to form. They ultimately tell their teammates this when Loki reveals his impersonation of Wanda, and they confront him. When Steve Rogers is sent travelling back and forth across his timeline, he is able to pass on a message to the Avengers in the present by briefly isolating himself with the Vision during the Kree-Skrull War and asking him to deliver a time-delayed message, which Jonas is able to access and share with the other Avengers. When the Mighty Avengers ultimately disband following the events of the "Siege" storyline, Jonas and Cassie rejoin the Young Avengers full-time.
In Avengers: Children's Crusade, Cassie is killed by Doctor Doom, and Iron Lad decides to take her body into the future to be revived. Jonas protests, reasoning that such an action is more in line with Kang's manipulation of time than what Cassie would want, and Iron Lad murders him in a fit of jealous anger. Although his teammates contemplate rebuilding him, they decide against it, both because they lack the 30th-century technology to do so and because, even with their access to his back-ups, the lack of a back-up immediately prior to his death would mean that they would have to tell him about Cassie's death all over again. Kate, Cassie's best friend, prefers to believe that he and Cassie are somehow together wherever they are now.
Powers and abilities
Vision is able to use Iron Lad's neuro-kinetic armor to recreate the former Vision's abilities, including superhuman strength, density manipulation, and flight. The yellow solar cell on his forehead can also emit a beam of infrared and microwave radiation. He is also capable of energy and holographic manipulation, shapeshifting and time travel.
References
Marvel Comics code names |
46989187 | https://en.wikipedia.org/wiki/Iopas | Iopas | In Virgil's Aeneid, Iopas is a bard at the court of Dido. He appears at the end of Book 1, where he sings the so-called "Song of Iopas", a creation narrative, at the banquet given for Aeneas and his Trojans.
Text, context
The passage in Virgil:
...cithara crinitus Iopas
personat aurata, docuit quem maximus Atlas.
hic canit errantem lunam solisque labores,
unde hominum genus et pecudes, unde imber et ignes,
Arcturum pluuiasque Hyadas geminosque Triones,
quid tantum Oceano properent se tingere soles
hiberni, uel quae tardis mora noctibus obstet
A student of Atlas, the maestro,
Livens the air with his gilded harp. For the long-haired Iopas
Sings of the unpredictable moon, of the sun and its labours,
Origins human and animal, causes of fire and of moisture,
Stars (Lesser, Greater Bear, rainy Hyades, also Arcturus),
Why in the winter the sun so hurries to dive in the Ocean,
What slows winter's lingering nights, what blocks and delays them. (Tr. Frederick Ahl)
As Christine G. Perkell points out, Iopas's song consists of "commonplaces of the didactic genre" rather than heroic song, which is the kind of song one could have expected from a court poet like Phemius or Demodocus from the Odyssey. Iopas's song resembles Lucretius's De Rerum Natura, Hesiod's Works and Days, and Virgil's own Georgics.
Interpretation
Many interpretations have been offered for Iopas's song. Classicist Eve Adler, who paid particular attention to how the Trojans at the banquet wait to applaud the song until the Carthaginians have expressed their appreciation, notes that Iopas's naturalistic explanation of the world (requiring no gods) comes as a surprise to the Trojans; Adler sees the passage as anticipated in Virgil's Georgics, at the end of Book 2 and the beginning of Book 3. For Adler, Iopas is a kind of Lucretius-figure (whose message Virgil rejects). Classicist Timothy Power considers that Iopas evokes King Juba II of Numidia, a famous scholar of the Augustan era.
References
Characters in the Aeneid |
3827279 | https://en.wikipedia.org/wiki/Software%20pipelining | Software pipelining | In computer science, software pipelining is a technique used to optimize loops, in a manner that parallels hardware pipelining. Software pipelining is a type of out-of-order execution, except that the reordering is done by a compiler (or in the case of hand written assembly code, by the programmer) instead of the processor. Some computer architectures have explicit support for software pipelining, notably Intel's IA-64 architecture.
It is important to distinguish software pipelining, which is a target code technique for overlapping loop iterations, from modulo scheduling, the currently most effective known compiler technique for generating software pipelined loops.
Software pipelining has been known to assembly language programmers of machines with instruction-level parallelism for as long as such architectures have existed. Effective compiler generation of such code dates to the invention of modulo scheduling by Rau and Glaeser.
Lam showed that special hardware is unnecessary for effective modulo scheduling. Her technique, modulo variable expansion, is widely used in practice.
Gao et al. formulated optimal software pipelining in integer linear programming, culminating in validation of advanced heuristics in an evaluation paper. This paper has a good set of references on the topic.
Example
Consider the following loop:
for i = 1 to bignumber
A(i)
B(i)
C(i)
end
In this example, let A(i), B(i), C(i) be instructions, each operating on data i, that are dependent on each other. In other words, A(i) must complete before B(i) can start. For example, A could load data from memory into a register, B could perform some arithmetic operation on the data, and C could store the data back into memory. However, let there be no dependence between operations for different values of i. In other words, A(2) can begin before A(1) finishes.
Without software pipelining, the operations execute in the following sequence:
A(1) B(1) C(1) A(2) B(2) C(2) A(3) B(3) C(3) ...
Assume that each instruction takes 3 clock cycles to complete (ignore for the moment the cost of the looping control flow). Also assume (as is the case on most modern systems) that an instruction can be dispatched every cycle, as long as it has no dependencies on an instruction that is already executing. In the unpipelined case, each iteration thus takes 9 cycles to complete: 3 clock cycles for A(1), 3 clock cycles for B(1), and 3 clock cycles for C(1).
Now consider the following sequence of instructions with software pipelining:
A(1) A(2) A(3) B(1) B(2) B(3) C(1) C(2) C(3) ...
It can be easily verified that an instruction can be dispatched each cycle, which means that the same 3 iterations can be executed in a total of 9 cycles, giving an average of 3 cycles per iteration.
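For concreteness, the following C sketch expresses the same schedule at the source level: the load for iteration i+2 (the role of A), the computation for iteration i+1 (B), and the store for iteration i (C) are all in flight in the same pass through the loop body. The function name, the load/multiply/store split, and the fixed two-iteration offset are illustrative assumptions; in practice compilers apply software pipelining to machine instructions rather than to C statements.

#include <stddef.h>

/* Illustrative source-level pipelining of a "load, compute, store" loop.
   Real software pipelining is performed by the compiler on machine
   instructions; this is only a sketch of the idea. */
void scale_pipelined(const double *in, double *out, double k, size_t n) {
    if (n < 3) {                     /* too short to pipeline: plain loop */
        for (size_t i = 0; i < n; i++)
            out[i] = in[i] * k;
        return;
    }
    double a0 = in[0];               /* prologue: start iterations 0 and 1 */
    double b0 = a0 * k;
    double a1 = in[1];
    for (size_t i = 0; i + 2 < n; i++) {
        double a2 = in[i + 2];       /* A(i+2): load two iterations ahead   */
        double b1 = a1 * k;          /* B(i+1): compute one iteration ahead */
        out[i] = b0;                 /* C(i):   store the oldest result     */
        a1 = a2;                     /* shift values down the "pipeline"    */
        b0 = b1;
    }
    out[n - 2] = b0;                 /* epilogue: drain the last two results */
    out[n - 1] = a1 * k;
}

The prologue before the loop and the two stores after it play exactly the roles of the prologue and epilogue discussed in the next section.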
Implementation
Software pipelining is often used in combination with loop unrolling, and this combination of techniques is often a far better optimization than loop unrolling alone. In the example above, we could write the code as follows (assume for the moment that bignumber is divisible by 3):
for i = 1 to (bignumber - 2) step 3
A(i)
A(i+1)
A(i+2)
B(i)
B(i+1)
B(i+2)
C(i)
C(i+1)
C(i+2)
end
Of course, matters are complicated if (as is usually the case) we can't guarantee that the total number of iterations will be divisible by the number of iterations we unroll. See the article on loop unrolling for more on solutions to this problem, but note that software pipelining prevents the use of Duff's device.
In the general case, loop unrolling may not be the best way to implement software pipelining. Consider a loop containing instructions with a high latency. For example, the following code:
for i = 1 to bignumber
A(i) ; 3 cycle latency
B(i) ; 3
C(i) ; 12 (perhaps a floating point operation)
D(i) ; 3
E(i) ; 3
F(i) ; 3
end
would require 12 iterations of the loop to be unrolled to avoid the bottleneck of instruction C. This means that the code of the loop would increase by a factor of 12 (which not only affects memory usage, but can also affect cache performance, see code bloat). Even worse, the prologue (code before the loop for handling the case of bignumber not divisible by 12) will likely be even larger than the code for the loop, and very probably inefficient because software pipelining cannot be used in this code (at least not without a significant amount of further code bloat). Furthermore, if bignumber is expected to be moderate in size compared to the number of iterations unrolled (say 10-20), then the execution will spend most of its time in this inefficient prologue code, rendering the software pipelining optimization ineffectual.
By contrast, here is the software pipelining for our example (the prologue and epilogue will be explained later):
prologue
for i = 1 to (bignumber - 6)
A(i+6)
B(i+5)
C(i+4)
D(i+2) ; note that we skip i+3
E(i+1)
F(i)
end
epilogue
Before getting to the prologue and epilogue, which handle iterations at the beginning and end of the loop, let's verify that this code does the same thing as the original for iterations in the middle of the loop. Specifically, consider iteration 7 in the original loop. The first iteration of the pipelined loop will be the first iteration that includes an instruction from iteration 7 of the original loop. The sequence of instructions is:
Iteration 1: A(7) B(6) C(5) D(3) E(2) F(1)
Iteration 2: A(8) B(7) C(6) D(4) E(3) F(2)
Iteration 3: A(9) B(8) C(7) D(5) E(4) F(3)
Iteration 4: A(10) B(9) C(8) D(6) E(5) F(4)
Iteration 5: A(11) B(10) C(9) D(7) E(6) F(5)
Iteration 6: A(12) B(11) C(10) D(8) E(7) F(6)
Iteration 7: A(13) B(12) C(11) D(9) E(8) F(7)
However, unlike the original loop, the pipelined version avoids the bottleneck at instruction C. Note that there are 12 instructions between C(7) and the dependent instruction D(7), which means that the latency cycles of instruction C(7) are used for other instructions instead of being wasted.
The prologue and epilogue handle iterations at the beginning and end of the loop. Here is a possible prologue for our example above:
; loop prologue (arranged on lines for clarity)
A(1)
A(2), B(1)
A(3), B(2), C(1)
A(4), B(3), C(2) ; cannot start D(1) yet
A(5), B(4), C(3), D(1)
A(6), B(5), C(4), D(2), E(1)
Each line above corresponds to an iteration of the main pipelined loop, but without the instructions for iterations that have not yet begun. Similarly, the epilogue progressively removes instructions for iterations that have completed:
; loop epilogue (arranged on lines for clarity)
B(bignumber), C(bignumber-1), D(bignumber-3), E(bignumber-4), F(bignumber-5)
C(bignumber), D(bignumber-2), E(bignumber-3), F(bignumber-4)
D(bignumber-1), E(bignumber-2), F(bignumber-3)
D(bignumber), E(bignumber-1), F(bignumber-2)
E(bignumber), F(bignumber-1)
F(bignumber)
Difficulties of implementation
The requirement of a prologue and epilogue is one of the major difficulties of implementing software pipelining. Note that the prologue in this example is 18 instructions, 3 times as large as the loop itself. The epilogue would also be 18 instructions. In other words, the prologue and epilogue together are 6 times as large as the loop itself. While still better than attempting loop unrolling for this example, software pipelining requires a trade-off between speed and memory usage. Keep in mind, also, that if the code bloat is too large, it will affect speed anyway via a decrease in cache performance.
A further difficulty is that on many architectures, most instructions use a register as an argument, and that the specific register to use must be hard-coded into the instruction. In other words, on many architectures, it is impossible to code such an instruction as "multiply the contents of register X and register Y and put the result in register Z", where X, Y, and Z are numbers taken from other registers or memory. This has often been cited as a reason that software pipelining cannot be effectively implemented on conventional architectures.
In fact, Monica Lam presents an elegant solution to this problem in her thesis, A Systolic Array Optimizing Compiler (1989). She calls it modulo variable expansion. The trick is to replicate the body of the loop after it has been scheduled, allowing different registers to be used for different values of the same variable when they have to be live at the same time. For the simplest possible example, let's suppose that A(i) and B(i) can be issued in parallel and that the latency of the former is 2 cycles. The pipelined body could then be:
A(i+2); B(i)
Register allocation of this loop body runs into the problem that the result of A(i+2) must stay live for two iterations. Using the same register for the result of A(i+2) and the input of B(i) will produce incorrect results.
However, if we replicate the scheduled loop body, the problem is solved:
A(i+2); B(i)
A(i+3); B(i+1)
Now a separate register can be allocated to the results of A(i+2) and A(i+3). To be more concrete:
r1 = A(i+2); B(i) = r1
r2 = A(i+3); B(i+1) = r2
i = i + 2 // Just to be clear
On the assumption that each instruction bundle reads its input registers before writing its output registers, this code is correct. At the start of the replicated loop body, r1 holds the value of A(i+2) from the previous replicated loop iteration. Since i has been incremented by 2 in the meantime, this is actually the value of A(i) in this replicated loop iteration.
Of course, code replication increases code size and cache pressure just as the prologue and epilogue do. Nevertheless, for loops with large trip counts on architectures with enough instruction level parallelism, the technique easily performs well enough to be worth any increase in code size.
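A source-level C sketch of the same idea is shown below: the two variables r1 and r2 stand in for the two registers, so that two results of A can be live at once without overwriting each other. The helper functions A and B are hypothetical stand-ins for the scheduled operations, and n is assumed to be even and at least 4 so that the epilogue stays simple; none of this is taken from Lam's thesis beyond the general technique it illustrates.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical stand-ins for the scheduled operations: A produces a value
   with multi-cycle latency, and B consumes that value two iterations later. */
static double A(size_t i) { return (double)i * 2.0; }
static void   B(size_t i, double v) { printf("B(%zu) consumed %.1f\n", i, v); }

/* Modulo variable expansion at the source level: the scheduled body is
   replicated twice, and each copy keeps its long-lived A result in its own
   variable (r1 or r2). Assumes n is even and n >= 4. */
static void run_pipelined(size_t n) {
    double r1 = A(0);                    /* prologue: A runs two iterations */
    double r2 = A(1);                    /* ahead of the B that consumes it */
    size_t i;
    for (i = 0; i + 3 < n; i += 2) {
        B(i, r1);      r1 = A(i + 2);    /* copy 1 of the body uses r1 */
        B(i + 1, r2);  r2 = A(i + 3);    /* copy 2 of the body uses r2 */
    }
    B(i, r1);                            /* epilogue: drain the last two */
    B(i + 1, r2);                        /* results (i == n - 2 here)    */
}

int main(void) {
    run_pipelined(8);
    return 0;
}

Stepping i by 2 mirrors the "i = i + 2" in the replicated pseudocode above; with an odd trip count or a different latency, the prologue and epilogue would need the same kind of adjustment discussed earlier.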
IA-64 implementation
Intel's IA-64 architecture provides an example of an architecture designed with the difficulties of software pipelining in mind. Some of the architectural support for software pipelining includes:
A "rotating" register bank; instructions can refer to a register number that is redirected to a different register each iteration of the loop (eventually looping back around to the beginning). This makes the extra instructions inserted in the previous example unnecessary.
Predicates (used to "predicate" instructions; see Branch predication) that take their value from special looping instructions. These predicates turn on or off certain instructions in the loop, making a separate prologue and epilogue unnecessary.
References
Compiler optimizations |
62316467 | https://en.wikipedia.org/wiki/Andrea%20Danyluk | Andrea Danyluk | Andrea Pohoreckyj Danyluk is an American computer scientist and computer science educator. She is Mary A. and William Wirt Warren Professor of Computer Science at Williams College, and co-chair of the Committee on Widening Participation in Computing Research of the Computing Research Association.
Education
Danyluk earned a bachelor's degree in mathematics and computer science from Vassar College in 1984. She completed her Ph.D. in computer science at Columbia University in 1989. Her dissertation, Extraction and Use of Contextual Attributes for Theory Completion: An Integration of Explanation-Based and Similarity-Based Learning, concerned machine learning and was supervised by Kathleen McKeown.
Career
After working in industry for several years, Danyluk joined Williams College as an assistant professor in 1993. At Williams, she chaired the computer science department from 2005 to 2008, and the cognitive science program from 2005 to 2006. She was acting dean of the faculty from 2009 to 2010. She was Dennis A. Meenan '54 Third Century Professor of Computer Science at Williams College from 2012 to 2018, and was given the Mary A. and William Wirt Warren Professorship in 2018.
She has also worked at Northeastern University as a visiting director and founding director of a master's program aimed at computer science students who studied other subjects as undergraduates. She remains associated with Northeastern as a member of the advisory council of the Center for Inclusive Computing.
Education
Danyluk is a proponent of event-driven programming in lower-level computer science education.
With Kim Bruce and Thomas Murtagh, she is the author of a textbook that follows this view, Java: An Eventful Approach (Prentice Hall, 2006).
References
External links
Home page
Year of birth missing (living people)
Living people
American computer scientists
American women computer scientists
Vassar College alumni
Columbia University alumni
Williams College faculty
American women academics
21st-century American women |
19992532 | https://en.wikipedia.org/wiki/Early%20history%20of%20video%20games | Early history of video games | The history of video games spans a period of time between the invention of the first electronic games and today, covering many inventions and developments. Video gaming reached mainstream popularity in the 1970s and 1980s, when arcade video games, gaming consoles and home computer games were introduced to the general public. Since then, video gaming has become a popular form of entertainment and a part of modern culture in most parts of the world. The early history of video games, therefore, covers the period of time between the first interactive electronic game with an electronic display in 1947, the first true video games in the early 1950s, and the rise of early arcade video games in the 1970s (Pong and the beginning of the first generation of video game consoles with the Magnavox Odyssey, both in 1972). During this time there was a wide range of devices and inventions corresponding with large advances in computing technology, and the actual first video game is dependent on the definition of "video game" used.
Following the 1947 invention of the cathode-ray tube amusement device—the earliest known interactive electronic game as well as the first to use an electronic display—the first true video games were created in the early 1950s. Initially created as technology demonstrations, such as the Bertie the Brain and Nimrod computers in 1950 and 1951, video games also became the purview of academic research. A series of games, generally simulating real-world board games, were created at various research institutions to explore programming, human–computer interaction, and computer algorithms. These include OXO and Christopher Strachey's draughts program in 1952, the first software-based games to incorporate a CRT display, and several chess and checkers programs. Possibly the first video game created simply for entertainment was 1958's Tennis for Two, featuring moving graphics on an oscilloscope. As computing technology improved over time, computers became smaller and faster, and the ability to work on them was opened up to university employees and undergraduate students by the end of the 1950s. These new programmers began to create games for non-academic purposes, leading up to the 1962 release of Spacewar! as one of the earliest known digital computer games to be available outside a single research institute.
Throughout the rest of the 1960s increasing numbers of programmers wrote digital computer games, which were sometimes sold commercially in catalogs. As the audience for video games expanded to more than a few dozen research institutions with the falling cost of computers, and programming languages that would run on multiple types of computers were created, a wider variety of games began to be developed. Video games transitioned into a new era in the early 1970s with the launch of the commercial video game industry in 1971 with the display of the coin-operated arcade game Galaxy Game and the release of the first arcade video game Computer Space, and then in 1972 with the release of the immensely successful arcade game Pong and the first home video game console, the Magnavox Odyssey, which launched the first generation of video-game consoles.
Defining the video game
The term "video game" has evolved over the decades from a purely technical definition to a general concept defining a new class of interactive entertainment. Technically, for a product to be a video game under early definitions, it needed to transmit a video signal to a display.
This can (but does not always) include a cathode ray tube (CRT), oscilloscope, liquid crystal display, vector-scan monitor, etc. This definition would preclude early computer games that outputted results to a printer or teletype rather than a display, as well as games that used static LCD graphics, for example Nintendo's Game & Watch or most Tiger Electronics handhelds. From a technical standpoint, these would more properly be called "electronic games" or "computer games".
Today the term "video game" has completely shed its purely hardware-dependent definition and encompasses a wider range of technology. While still rather ill-defined, the term "video game" now generally encompasses any game played on hardware built with electronic logic circuits that incorporates an element of interactivity and outputs the results of the player's actions to a display. Going by this broader definition, the first video games appeared in the early 1950s; they were tied largely to research projects at universities and large corporations, though, and had little influence on each other due to their primary purpose as academic and promotional devices rather than entertainment games.
The ancestors to these games include the cathode-ray tube amusement device, the earliest known interactive electronic game as well as the first to incorporate a cathode-ray tube screen. The player simulates an artillery shell trajectory on a CRT screen connected to an oscilloscope, with a set of knobs and switches. The device uses purely analog electronics and does not use any digital computer or memory device or execute a program. It was patented by Thomas T. Goldsmith Jr. and Estle Ray Mann in 1947. While the idea behind the game was potentially to use a television set as the display and thus sell the invention to consumers, as Goldsmith and Mann worked at television designer DuMont Laboratories, the patent, the first for an electronic game, was never used and the device never manufactured beyond the original handmade prototypes.
This, along with the lack of electronic logic circuits, keeps the device from being considered the first video game. In 1948, shortly after the patenting of this device, Alan Turing and David Champernowne developed the earliest known written computer game, a chess simulation called Turochamp, though it was never actually implemented on a computer as the code was too complicated to run on the machines of the time. Turing tested the program in 1952 by mimicking its operation in a real chess game against an opponent, but he was never able to run it on a computer.
Initial games
The first electronic digital computers, Colossus and ENIAC, were built during World War II to aid the Allied war effort. Shortly after the war, the promulgation of the first stored program architectures at the University of Manchester (Manchester Mark 1), University of Cambridge (EDSAC), the University of Pennsylvania (EDVAC), and Princeton University (IAS machine) allowed computers to be easily reprogrammed to undertake a variety of tasks, which facilitated commercializing computers in the early 1950s by companies like Remington Rand, Ferranti, and IBM. This in turn promoted the adoption of computers by universities, government organizations, and large corporations as the decade progressed. It was in this environment that the first video games were born.
The computer games of the 1950s can generally be divided into three categories: training and instructional programs, research programs in fields such as artificial intelligence, and demonstration programs intended to impress or entertain the public. Because these games were largely developed on unique hardware in a time when porting between systems was difficult and were often dismantled or discarded after serving their limited purposes, they did not generally influence further developments in the industry. For the same reason, it is impossible to be certain who developed the first computer game or who originally modeled many of the games or play mechanics introduced during the decade, as there are likely several games from this period that were never publicized and are thus unknown today.
The earliest known publicly demonstrated electronic game was created in 1950. Bertie the Brain was an arcade game of tic-tac-toe, built by Josef Kates for the 1950 Canadian National Exhibition. To showcase his new miniature vacuum tube, the additron tube, he designed a specialized computer to use it, which he built with the assistance of engineers from Rogers Majestic. The large metal computer, which was four meters tall, could only play tic-tac-toe on a lightbulb-backed display, and was installed in the Engineering Building at the Canadian National Exhibition from August 25 to September 9, 1950. The game was a success at the two-week exhibition, with attendees lining up to play it as Kates adjusted the difficulty up and down for players. After the exhibition, Bertie was dismantled, and "largely forgotten" as a novelty. Kates has said that he was working on so many projects at the same time that he had no energy to spare for preserving it, despite its significance.
Nearly a year later on May 5, 1951, the Nimrod computer—created by engineering firm and nascent computer developer Ferranti—was presented at the Festival of Britain, and then showcased for three weeks in October at the Berlin Industrial Show before being dismantled. Using a panel of lights for its display, it was designed exclusively to play the game of Nim; moves were made by players pressing buttons which corresponded with the lights. Nimrod could play either the traditional or "reverse" form of the game. The machine was twelve feet wide, nine feet deep, and five feet tall. It was based on an earlier Nim-playing machine, "Nimatron", designed by Edward Condon and built by Westinghouse Electric in 1940 for display at the New York World's Fair. "Nimatron" had been constructed from electromechanical relays and weighed over a ton. The Nimrod was primarily intended to showcase Ferranti's computer design and programming skills rather than entertain, and was not followed up by any future games. Despite this, most of the onlookers at the Festival of Britain were more interested in playing the game than in the programming and engineering logic behind it.
Around this time, non-visual games were being developed at various research computer laboratories; for example, Christopher Strachey developed a simulation of the game draughts, or checkers, for the Pilot ACE that he unsuccessfully attempted to run for the first time in July 1951 at the British National Physical Laboratory and completed in 1952; this is the first known computer game to be created for a general-purpose computer, rather than a machine specifically made for the game like Bertie. Strachey's program inspired Arthur Samuel to develop his own checkers game in 1952 for the IBM 701; successive iterations developed rudimentary artificial intelligence by 1955 and a version was shown on television in 1956. Also in 1951, Dietrich Prinz wrote the first limited program of chess for the University of Manchester's general-purpose Ferranti Mark 1 computer, one of the first commercially available computers. The program was only capable of computing "mate-in-two" problems as it was not powerful enough to play a full game, and it had no video output. Around the same time in the early 1950s, military research organizations like the RAND Corporation developed a series of combat simulation games of increasing complexity, such as Carmonette, where the player would enter orders to intercept enemy aircraft, or set up their forces to counter an enemy army invasion. These simulations were not yet true video games, as they required human intervention to interpret the player's orders and the final results; the computer only controlled the paths that the enemies would take, and the program was focused on simulating events and probabilities.
Interactive visual games
In 1952, Alexander S. Douglas created OXO, a software program for the EDSAC computer, which simulates a game of tic-tac-toe. The EDSAC was one of the first stored-program computers, with memory that could be read from or written to, and filled an entire room; it included three 35×16 dot matrix cathode ray tubes to graphically display the state of the computer's memory. As a part of a thesis on human–computer interaction, Douglas used one of these screens to portray other information to the user; he chose to do so via displaying the current state of a game. The player entered input using a rotary telephone controller, selecting which of the nine squares on the board they wished to move next. Their move would appear on the screen, and then the computer's move would follow. The game was not available to the general public, and was only available to be played in the University of Cambridge's Mathematical Laboratory, by special permission, as the EDSAC could not be moved. Like other early video games, after serving Douglas's purpose, the game was discarded. Around the same time, Strachey expanded his draughts program for another mainframe computer, the Manchester Mark 1, culminating in a version for the Ferranti Mark 1 in 1952, which had a CRT display. Like OXO, the display was mostly static, updating only when a move was made. OXO and Strachey's draughts program are the earliest known games to display visuals on an electronic screen.
The first known game incorporating graphics that updated in real time, rather than only when the player made a move, was a simulation of a bouncing ball created by Massachusetts Institute of Technology (MIT) student Oliver Aberth for the Whirlwind I computer. He initially created the simulation in February 1951, which allowed users to adjust the frequency of the bounces with a knob, and sometime between late 1951 and 1953 made it into a game by adding a hole in the floor for players to aim for. The game was used in classes at MIT by Charles W. Adams, assistant professor of digital computers. It was followed by a pool game programmed by William Brown and Ted Lewis specifically for a demonstration of the University of Michigan-developed MIDSAC computer at the university in 1954. The game, developed over six months by the pair, featured a pool stick controlled by a joystick and a knob, and a full rack of 15 balls on a table seen in an overhead view. The computer calculated the movements of the balls as they collided and moved around the table, disappearing when they reached a pocket, and updated the graphics continuously, forty times a second, so as to show real-time motion. Like previous video games, the pool game was intended primarily to showcase the computing power of the MIDSAC computer.
While further games like checkers and chess were developed on research computers, the next milestone in video games came in 1958 with Tennis for Two. Perhaps the first game created solely for entertainment rather than as a technology demonstration or a research tool, the program simulated a game of tennis. Created by American physicist William Higinbotham to be more entertaining for visitors to the Brookhaven National Laboratory on its public day than the usual static exhibits about nuclear power, the game ran on a Donner Model 30 analog computer and displayed a side view of a tennis court on an oscilloscope. The players controlled the angle of their shots with attached controllers, and the game calculated and simulated the trajectory of the ball, including the possibility of hitting the net. The game was first shown on October 18, 1958. Hundreds of visitors lined up to play the new game during its debut. Due to the game's popularity, an upgraded version was shown the following year, with enhancements including a larger screen and different levels of simulated gravity. Afterwards, having served its purpose, the game was dismantled for its component parts. While the game had no innovations in game design or technological development, its status as an entertainment-focused game, rather than an academic project or technological showpiece, has led it to be considered one of the first "real" video games as they are generally thought of today.
Over the next few years, during 1957–61, various computer games continued to be created in the context of academic computer and programming research, particularly as computer technology improved to include smaller, transistor-based computers on which programs could be created and run in real time, rather than operations run in batches. A few programs, however, while used to showcase the power of the computer they ran on were also intended as entertainment products; these were generally created by undergraduate students, such as at MIT where they were allowed on occasion to develop programs for the TX-0 experimental computer. These interactive graphical games were created by a community of programmers, many of them students affiliated with the Tech Model Railroad Club (TMRC) led by Alan Kotok, Peter Samson, and Bob Saunders. The games included Tic-Tac-Toe, which used a light pen to play a simple game of noughts and crosses against the computer, and Mouse in the Maze. Mouse in the Maze allowed users to use a light pen to set up a maze of walls on the monitor, and spots that represented bits of cheese or glasses of martini. A virtual mouse was then released and would traverse the maze to find the objects. Additionally, the wargame simulations from the early 1950s by the RAND Corporation had expanded into more complicated simulations which required little human intervention, and had also sparked the creation of business management simulation games such as The Management Game, which was used in business schools such as at Carnegie Mellon University by 1958. By 1961, there were over 89 different business simulation games in use, with various graphical capabilities. As the decade ended, despite several video games having been developed, there was no such thing as a commercial video game industry; almost all games had been developed on or as a single machine for specific purposes, and the few simulation games were neither commercial nor for entertainment.
The spread of games
By 1961, MIT had acquired the DEC PDP-1 minicomputer, the successor to the TX-0, which also used a vector display system. The system's comparatively small size and processing speed meant that, like with the TX-0, the university allowed its undergraduate students and employees to write programs for the computer which were not directly academically related whenever it was not in use. In 1961–62, Harvard and MIT employees Martin Graetz, Steve Russell, and Wayne Wiitanen created the game Spacewar! on the PDP-1, inspired by science fiction books such as the Lensman series. The game was copied to several of the early minicomputer installations in American academic institutions, making it potentially the first video game to be available outside a single research institute.
The two-player game has the players engaged in a dogfight between two spaceships set against the backdrop of a randomly generated background starfield. The game was developed to meet three precepts: to use as much of the computer's resources as possible, to be consistently interesting and therefore have every run be different, and to be entertaining and therefore a game. The game was a multiplayer game because the computer had no resources left over to handle controlling the other ship. After the game's initial development, members of the TMRC worked to improve the game, adding an accurate starfield and a gravitational body, and spread it to the couple dozen other institutions with a PDP-1, a process which continued over the next few years. As the computer was uncomfortable to use for extended periods of time, Kotok and Saunders created a detached control device, essentially an early gamepad. Spacewar was reportedly used as a smoke test by DEC technicians on new PDP-1 systems before shipping, since it was the only available program that exercised every aspect of the hardware. Although the game was widespread for the era, it was still very limited in its direct reach: the PDP-1 was priced at US$120,000 () and only 55 were ever sold, many without a monitor, which prohibited Spacewar or any game of the time from reaching beyond a narrow, academic audience. Russell has been quoted as saying that the aspect of the game that he was most pleased with was the number of other programmers it inspired to write their own games.
Although the market for commercial games—and software in general—was small, due to the cost of computers limiting their spread to research institutions and large corporations, several were still created by programmers and distributed by the computer manufacturers. A number of games could be found in an April 1962 IBM program catalog. These included board games, "BBC Vik The Baseball Demonstrator", and "Three Dimensional Tic-Tack-Toe". Following the spread of Spacewar, further computer games developed by programmers at universities were also developed and distributed over the next few years. These included the Socratic System, a question and answer game designed to teach medical students how to diagnose patients by Wallace Feurzeig in 1962, and a dice game by Edward Steinberger in 1965. Mainframe games were developed outside of the IBM and DEC communities as well, such as the 1962 Polish Marienbad for the Odra 1003. A joint research project between IBM and the Board of Cooperative Educational Services of Westchester County, New York led to the creation of The Sumerian Game, one of the first strategy video games ever made, the first game with a narrative, and the first edutainment game; it was also the first known game to be designed by a woman, teacher Mabel Addis.
The creation of general programming languages like BASIC, which could be run on different hardware types, allowed for programs to be written for more than one specific computer, in turn letting games written in them to spread to more end players in the programming community than before. These games included a baseball simulation game written in BASIC by John Kemeny in 1965; a BASIC bingo game by Larry Bethurum in 1966; a basketball simulation game written in BASIC by Charles R. Bacheller in May 1967; another baseball game that simulates the 1967 World Series written in BASIC by Jacob Bergmann in August 1967; Space Travel, written by Ken Thompson for a Multics system in 1969 and which led in part to the development of the Unix operating system; and Hamurabi, a text-based FOCAL game written by Doug Dyment in 1968 based on a description of The Sumerian Game and converted to BASIC by David H. Ahl in 1969. Hamurabi and Space Travel were among several early mainframe games that were written during the time, and spread beyond their initial mainframe computers to general-purpose languages like BASIC.
A new industry
At the beginning of the 1970s, video games existed almost entirely as novelties passed around by programmers and technicians with access to computers, primarily at research institutions and large companies. The history of video games transitioned into a new era early in the decade, however, with the rise of the commercial video game industry.
The arcade video game industry grew out of the pre-existing arcade game industry, which was previously dominated by electro-mechanical games (EM games). Following the arrival of Sega's EM game Periscope (1966), the arcade industry was experiencing a "technological renaissance" driven by "audio-visual" EM novelty games, establishing the arcades as a healthy environment for the introduction of commercial video games in the early 1970s. The first commercial arcade video game was Computer Space (1971), which was developed by Nolan Bushnell and Ted Dabney and was based on Spacewar. Bushnell, who had previously worked at an arcade, wanted to recreate Spacewar as an arcade game. They had found the Data General Nova, a US$4,000 computer that they thought would be powerful enough to run four games of Spacewar at once; the computer turned out to not actually be powerful enough for the project. While investigating the concept of replacing some of the computer with purpose-built hardware, however, the pair discovered that making a system explicitly for running such a game, rather than general programs, would be much less expensive: as low as $100. A prototype version had been successfully displayed for a short time in August 1971 in a local bar, the design was nearly finished, and the pair had founded a company around it called Syzygy. Bushnell had also found a manufacturer for the game, Nutting Associates, who would make the final game cabinets and sell them to distributors.
Another early coin-operated arcade video game was Galaxy Game (1971), developed by Bill Pitts and Hugh Tuck at Stanford University using a DEC PDP-11 computer with vector displays. The pair was also inspired to make the game by Spacewar; Tuck had remarked in 1966 while playing the game that a coin-operated version of the game would be very successful. Such a device was unfeasible in 1966 due to the cost of computers, but in 1969 DEC released the PDP-11 for US$20,000 (); while this was still too high for a commercially viable product, as most games in arcades cost around US$1,000 at the time, the pair felt it was low enough to build a prototype to determine interest and optimal per-game pricing. Only prototype units were ever built, though the second prototype was adapted to run up to eight games at once; a few months before the initial installation at Stanford in November 1971, the pair met with Nolan Bushnell, who informed them of his own game he was making for a much lower price.
Bushnell felt that Galaxy Game was not a real competitor to Computer Space, due to its high price. Pitts and Tuck believed, however, that despite the economic argument their game was superior, as they felt that Galaxy Game was a true expansion of Spacewar, while Computer Space was just a pale imitation. Some players at the time, however, believed Galaxy Game to actually be just a version of Spacewar!. Galaxy Game's prototype installation was very popular, though at a low price-per-game, and the pair developed a second version to display at the same location; the game never entered production, however, as the pair eventually abandoned the idea after spending US$65,000 developing it, due to the high cost and the lack of a business plan.
Around the same time as Galaxy Game's prototype installation, Computer Space was released. It was the first coin-operated video game to be commercially sold and the first widely available video game of any kind. While it did well in its initial locations near college campuses, it performed very poorly in bars and arcades where pinball and other arcade games were typically placed; while it was commercially successful and made over US$1,000,000, it did not meet the high expectations of Nutting, who had expected to sell more than 1,500 units. Bushnell and Dabney immediately started work on another game, using the same television set design as Computer Space, as well as founding their own company Atari, Inc. to back their projects. Initially, this game was intended to be a driving video game that Bushnell planned to design, influenced by Chicago Coin's Speedway (1969). Instead, the project was given to Atari's first employee, Allan Alcorn, and as Bushnell believed the driving game would be too complicated as a first project he suggested a prototype ping-pong game. Alcorn expanded the idea, and designed a game the company immediately seized on. They were unable to find a manufacturer, but on the evidence of the success of their prototype installation, decided to produce the game cabinets themselves. Pong was released in 1972, a year after Computer Space. It was immensely commercially successful, selling over 8,000 units. It inspired copycat games to be sold in America, Europe, and Japan, and led to the popularization of the medium.
That same year saw the release of the Magnavox Odyssey, the first home video game console which could be connected to a television set. The inventor, Ralph H. Baer, had initially had the idea in 1951 to make an interactive game on a television set. Unable to do so with the technological constraints at the time, he began work on a device that would attach to a television set and display games in 1966, and the "Brown Box", the last prototype of seven, was licensed to Magnavox to adapt and produce. They announced the console in May 1972, and it went on sale that September. The console and its games featured numerous innovations beyond being the first video game device for home consumers: it was the first game to use a raster-scan video display, or television set, directly displayed via modification of a video signal; it was also the first video gaming device to be displayed in a television commercial. It sold for US$100 and shipped with several games, including "Table tennis", which Bushnell had seen a demo of and which Pong had been based on. The Odyssey sold over 100,000 units in 1972, and more than 350,000 by the end of 1975, buoyed by the popularity of the table tennis game, in turn driven by the success of Pong. Pong and the Odyssey kicked off a new era of video gaming, with numerous other competitors starting up in the video game industry as it grew in popularity.
References
Sources
External links
Research
Ralph H. Baer Papers, 1943–1953, 1966–1972, 2006 – Ralph Baer's prototypes and documentation housed at the Smithsonian Lemelson Center
Classic Gaming Expo 2000: Baer Describes the Birth of Videogames
"History of Video Games" Timeline by the Computerspielemuseum Berlin
Game simulation
EDSAC simulator to play OXO
Nimrod Interactive Simulation for Be OS operating system
Tennis for Two simulation
Spacewar! Java simulation
Video games |
1112994 | https://en.wikipedia.org/wiki/CNC%20wood%20router | CNC wood router | A CNC wood router is a CNC router tool that creates objects from wood. CNC stands for computer numerical control. The CNC works on the Cartesian coordinate system (X, Y, Z) for 3D motion control. Parts of a project can be designed in the computer with a CAD/CAM program, and then cut automatically using a router or other cutters to produce a finished part.
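As a rough illustration of how a CAD/CAM design is reduced to coordinated X, Y and Z moves, the following Python sketch emits a G-code-style path for a simple rectangular cut. The dimensions, feed rate, safe height and the use of the generic G0/G1 move commands are assumptions made for the example; they are not the output of any particular CAD/CAM package or controller.

```python
# Hypothetical example: reduce a rectangular cut to G-code-style X/Y/Z moves.
# Dimensions, feed rate and safe height are invented; G0/G1 are the generic
# rapid/linear move commands, not the dialect of any specific controller.

def rectangle_toolpath(width_mm, height_mm, cut_depth_mm, safe_z_mm=5.0, feed_mm_min=1200):
    """Return a list of G-code-style moves tracing one pass around a rectangle."""
    corners = [(0, 0), (width_mm, 0), (width_mm, height_mm), (0, height_mm), (0, 0)]
    moves = [f"G0 Z{safe_z_mm}",                       # retract to a safe height
             f"G0 X{corners[0][0]} Y{corners[0][1]}",  # rapid move to the start corner
             f"G1 Z{-cut_depth_mm} F{feed_mm_min}"]    # plunge into the stock
    for x, y in corners[1:]:
        moves.append(f"G1 X{x} Y{y} F{feed_mm_min}")   # cut along each edge
    moves.append(f"G0 Z{safe_z_mm}")                   # retract when the pass is done
    return moves

for line in rectangle_toolpath(200, 100, 3):
    print(line)
```

A real CAM program would typically also apply tool-radius compensation, break deep cuts into multiple passes, and generate far denser paths for curved geometry.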
The CNC router is ideal for hobbies, engineering prototyping, product development, art, and production work.
Operation
A CNC wood router uses CNC (computer numerical control) and is similar to a metal CNC mill with the following differences:
The wood router typically spins faster — with a range of 13,000 to 24,000 RPM
Professional quality machines frequently use surface facing tools up to 3" in diameter or more, and spindle power from 5 to 15 horsepower. Machines capable of routing heavy material at over a thousand inches per minute are common.
Some machines use smaller toolholders such as MK2 (Morse taper #2, on older machines), ISO-30 or HSK-63, or the tools are simply held in a collet tool holder affixed directly to the spindle nose. ISO-30 and HSK-63 are rapid-change toolholding systems. HSK-63 has begun to supplant ISO-30 as the rapid-change standard in recent years.
A wood router is controlled in the same way as a metal mill, but there are CAM and CAD applications such as Artcam, Mastercam, Bobcad, and AlphaCam, which are specifically designed for use with wood routers.
Wood routers are frequently used to machine other soft materials such as plastics.
Typical three-axis CNC wood routers are generally much bigger than their metal shop counterparts. 5' x 5', 4' x 8', and 5' x 10' are typical bed sizes for wood routers. They can be built to accommodate very large sizes up to, but not limited to 12' x 100'. The table can move, allowing for true three axis (xyz) motion, or the gantry can move, which requires the third axis to be controlled by two slaved servo motors.
Advantages
The advantages of a CNC wood router, compared to a manually operated machine, are as follows:
High degree of automation
Consistent quality
High productivity
Processing complex shape
Easy to implement CAD/CAM
2D, 2.5D and 3D capable
Components
Separate heads
Some wood routers have multiple separate heads that can come down simultaneously or not. Some routers have multiple heads that can run complete separate programs on separate tables all while being controlled by the same interface.
Dust collection / vacuum collector
The wood router typically has 6"-10" air ducts to suck up the wood chips and dust created. They can be piped to a stand-alone or full shop dust collection system.
Some wood routers are specialized for cabinetry and have many drills that can be programmed to come down separately or together. The drills are generally spaced 32 mm apart on centres - a spacing system called 32 mm System. This is for the proper spacing of shelving for cabinets. Drilling can be vertical or horizontal (in the Y or X axis from either side/end of the workpiece) which allows a panel to be drilled on all four edges as well as the top surface. Many of these machines with large drilling arrays are derived from CNC point-to-point borers.
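As a small worked example of the spacing arithmetic, the Python snippet below lists hole centres on the 32 mm grid up a cabinet side; the 720 mm panel height and the 37 mm offset of the first hole are assumed example values rather than part of any machine's specification.

```python
# Worked example of the 32 mm System: hole centres every 32 mm up a cabinet side.
# The 720 mm panel height and the 37 mm offset of the first hole are assumed values.

def shelf_pin_positions(panel_height_mm, first_hole_mm=37, pitch_mm=32):
    """Return hole centre heights spaced on a 32 mm grid along the panel."""
    positions = []
    y = first_hole_mm
    while y <= panel_height_mm - first_hole_mm:
        positions.append(y)
        y += pitch_mm
    return positions

print(shelf_pin_positions(720))   # 37, 69, 101, ... up to 677 mm
```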
Securing the workpiece
Suction systems
The wood router typically holds wood with suction through the table or pods that raise the work above the table. Pods may be used for components which require edge profiling (or undercutting), are manufactured from solid wood or where greater flexibility in production is required. This type of bed requires less extraction with greater absolute vacuum.
A second type of hold-down uses a spoil board. This allows vacuum suction through a low-density table and allows the placement of parts anywhere on the table. These types of tables are typically used for nest-based manufacturing (NBM), where multiple components are routed from a single sheet. This type of manufacturing precludes edge drilling or undercut edge work on components.
Vacuum pumps are required with both types of tables where volume and "strength" are determined based on the types of materials being cut.
Maintenance
Proper operation and maintenance can greatly extend the machine's life and reduce the incidence of failures.
Avoid direct sunlight, excessive dust and humidity
Keep the machine away from equipment that produces high vibration
Keep dust and debris away from wheels and bearings, using compressed air to clean them
External links
Woodworking machines
Wood router |
27427757 | https://en.wikipedia.org/wiki/EFront%20%28eLearning%20software%29 | EFront (eLearning software) | eFront is an eLearning platform (also known as a Course Management System (CMS), Learning Management System (LMS), or Virtual Learning Environment (VLE)). eFront has historically been offered in a number of editions, from an open-source edition to the latest eFrontPro edition (the only edition still available as of 2018).
eFront is designed to assist with the creation of online learning communities while offering various opportunities for collaboration and interaction through an icon-based user interface. The platform offers tools for content creation, test building, assignment management, reporting, internal messaging, forum, chat, surveys, calendar and others. It is a SCORM 1.2 certified and SCORM 2004 / 4th edition compliant system translated into 40 languages.
eFront is commonly included in lists of well-known open-source learning systems or is referred to as a Moodle alternative. Independent comparison matrices of learning management systems often favor eFront, especially on usability characteristics. Several research papers and technology portals describe the system from functionality, usability and standards perspectives.
History
Initial development of the platform began in 2001 as a research prototype funded by the Greek government, led by Dimitris Tsingos and Athanasios Papangelis. SCORM development, together with a shift to AJAX technologies, led to the publication of a stable 2.5 version during 2005. eFront was then rewritten from scratch, making essential changes to the core structure of the system, and released under an open-source license in September 2007. Enterprise extensions were integrated with the platform in version 3.5. Social extensions were the most significant addition to version 3.6.
On May 9, 2016, Epignosis LLC announced the signing of a strategic partnership deal with US-based software consulting and services provider DHx Software.
Editions
Apart from the community edition that is distributed as open source software, there are three commercial editions with a modified feature set, targeted at learning professionals, educational institutions and enterprises. All versions are provided with their source code, but only the community edition uses an Open Source Initiative (OSI)-approved license. The commercial versions of eFront are distributed via a partners network.
Awards
In September 2012 eFront won an award from Elearning! Magazine as the best Open Source Solution. In April 2010 eFront won a coveted bronze award for technology excellence in the Learning Management Technology for Small- and Medium-sized Businesses category from Brandon-Hall Research. eFront is also listed as one of the Top LMS Software Solutions for 2012 and 2013.
Features
eFront has a number of features typically found in eLearning platforms:
User management
Lessons, courses, curriculum and categories management
Files management
Exam builders
Assignments builders
Communication tools (forum, chat, calendar, glossary)
Progress tracking
Authentication methods
Enrollment methods
Certifications
Reports generators
Extensibility via modules
Payments integration (through PayPal)
Social tools (lesson & system history, user wall, user status, Facebook interconnection)
Customizable notification system through email
Skinning via themes
It also has several features needed in an enterprise environment:
Organization structure management
Skills management
Job positions management
Automatic assignment of courses to specific job descriptions
Skills gap tests management
User card with training history
LDAP support
Specifications
eFront runs without modification on Linux, Microsoft Windows, and any other operating system that supports PHP 5.1+ and MySQL 5+. The platform is built using the object-oriented programming paradigm, and its architecture is based on a 3-tier design approach that separates the system's presentation from its logic and data. The platform is maintained through a community-driven process. This leads to small development cycles that produce incremental improvements to the system, followed by bigger development cycles that integrate features requiring architectural changes. The development and testing procedures utilize several aspects of extreme programming.
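Because eFront itself is written in PHP, the fragment below is only a language-agnostic sketch (shown in Python) of the 3-tier separation described above; none of the class or function names correspond to actual eFront code.

```python
# Language-agnostic illustration of the 3-tier idea (eFront itself is written in PHP;
# none of these classes correspond to actual eFront code).

class LessonRepository:            # data tier: talks to the database
    def __init__(self, rows):
        self._rows = rows
    def find_by_id(self, lesson_id):
        return self._rows.get(lesson_id)

class LessonService:               # logic tier: enforces business rules
    def __init__(self, repo):
        self._repo = repo
    def enroll(self, user, lesson_id):
        lesson = self._repo.find_by_id(lesson_id)
        if lesson is None:
            raise ValueError("unknown lesson")
        return {"user": user, "lesson": lesson["title"], "status": "enrolled"}

def render_enrollment(result):     # presentation tier: formats output for the UI
    return f"{result['user']} enrolled in '{result['lesson']}'"

repo = LessonRepository({1: {"title": "Intro to SCORM"}})
print(render_enrollment(LessonService(repo).enroll("alice", 1)))
```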
See also
Learning management system
Online learning community
References
External links
— eFront site, community and software.
Free software programmed in PHP
Virtual learning environments
Free learning management systems |
48969921 | https://en.wikipedia.org/wiki/Luiz%20P%C3%A4etow | Luiz Päetow | Luiz Päetow (born 1979) is a Brazilian theatre director, actor and playwright.
Early life and education
Päetow started working at age 11, with several productions of the British Council Theatre Group in São Paulo, including plays by William Shakespeare, Federico Garcia Lorca, Nelson Rodrigues, and also musicals by Cole Porter with guest director Nancy Diuguid. Later, he entered the Conservatory for Dramatic Arts (located inside the School of Communications and Arts) and acted in Peter Weiss' Marat/Sade, Tennessee Williams' The Glass Menagerie, Arnold Wesker's The Kitchen, Bertolt Brecht's The Baden-Baden Lesson on Consent. As a child, he also developed cinephilia, attending international film festivals where, after seven years, he was allowed to work as an interpreter for the jury members Abbas Kiarostami, Artavazd Peleshyan, Béla Tarr and Oja Kodar. At age 19, he audited a master's degree course on Pier Paolo Pasolini.
Career
Between 1996 and 2001, Päetow became a central player for CPT (Centre for Theatre Research). During this period, he created the experimental Prêt-à-Porter. For this specific project, he directed, wrote and starred in five plays: Passengers, Under the Bridge, No Concert, Hours of Punishment and Wings of the Shadow. His documents were also published later in book form, along with essays by Renato Janine Ribeiro and Olgária Matos, among others. In 1998, he worked as assistant director to Daniela Thomas on Anton Chekhov's The Seagull, starring Fernanda Montenegro. In 1999, he worked on The Trojan Fragments which received the Theatre Shell Award and the Art Critics' Association Prize. This production had its world-premiere at the Istanbul International Theatre Festival and was also presented at the second Theatre Olympics in Shizuoka, where Päetow represented Brazil on the International Committee, with Tadashi Suzuki, Robert Wilson, Yuri Lyubimov, Nuria Espert and Theodoros Terzopoulos. At this meeting, they discussed the performing arts of the next century. In 2000, he debuted as an opera director with Henry Purcell's The Fairy-Queen.
Thanks to arrangements between CPT and CICT (International Centre for Theatre Creation), Päetow was then allowed to watch the final rehearsals for Peter Brook's Hamlet, at the Théâtre des Bouffes du Nord, with Adrian Lester. This year spent in Paris also enabled conversations with Cristiana Reali for a future collaboration on his still-unproduced new play Washed-up Doc, and with Claire Denis and Chantal Akerman, during her retrospective at the Studio des Ursulines, aimed at further developing his Prêt-à-Porter's transfer from stage to screen. Before moving to Berlin, he took part in Jean Babilée's open masterclass at the Ballet de l'Opéra national de Paris. The following year, he reconnected with Sasha Waltz, with whom he had trained five years before, during her workshops at FID (São Paulo International Dance Festival).
In 2003, Päetow played the lead in the first Brazilian production of Sarah Kane's 4.48 Psychosis, which ran nonstop until April 2004. After this, he presented, at the Volksbühne, the marathon of five plays Rebellion in the Backlands, staged by Zé Celso. In 2006, he created his first solo, entitled Plays, based on the lecture written by Gertrude Stein, to whom he also devoted a three-day event examining her life and works. In the same year, he performed the title role in Georg Büchner's Leonce and Lena, directed by Gabriel Villela, nominated as best actor by the Art Critics' Association. In 2007, Päetow directed his adaptation of Clarice Lispector's novel Água Viva. Then, commissioned by the Satyrianas Festival, he wrote the play Heaven in Heat, which was presented under the pseudonym Zita Woulpe, an anagram of his name. In 2008, he starred in two productions: Cascando and Words & Music by Samuel Beckett. In 2009, he directed Music-Hall by Jean-Luc Lagarce, which he also translated and created the set/lighting designs, thus receiving the Theatre Shell Award. In 2010, he created his second solo, the endless Abracadabra, nominated for the Shell Awards.
In 2011, Päetow premiered his third solo, Ex-Machines. Back to Berlin, he developed a partnership with two musical ensembles, Klank and Trio Nexus, in order to create his play Der Hausierer, freely based on the novel The Peddler by Peter Handke. This launched his project Taeter, aimed at empowering anonymous voices and performed at undisclosed venues. The next year, he directed two dance pieces: Occurrences and Or Memory Reinvented, both recipients of the São Paulo City Hall Dance Sponsorship. In 2014, he presented a new solo, Lazarus, his adaptation of Hilda Hilst's homonymous short story. Then, he coordinated an artistic residency inside the ruins of a historic movie theater, where he presented open rehearsals for W, his next creation. In the same year, Päetow's previous plays were published in a three-volume box set. He would also start his second opera direction with Four Saints in Three Acts, libretto by Gertrude Stein. In 2015, invited by Felipe Hirsch, he took part in Puzzle, performing the poetry of Haroldo de Campos, Paulo Leminski and Gregório de Matos.
In 2019, Päetow's poem Theatre Capsule was published, with the first Brazilian translation of Gertrude Stein's Ida: A Novel. After this, he started rehearsals for Sodom Gomorrah, stylized as $ODOM\G/OMORRAH, the posthumous play by Antunes Filho. The premiere moved from April 2020 to November 2021, due to the coronavirus pandemic. In the meantime, Päetow debuted as a filmmaker with Transmission and Transition, a double feature inspired by elements of the play. The cast included Matheus Nachtergaele, Grace Passô and Christian Malheiros.
Awards and nominations
Theatre Shell - nomination for Abracadabra (2011)
Theatre Shell - award for Music-Hall (2010)
Art Critics' Association - nomination for Leonce and Lena (2007)
Brazil's Artistic Quality - nomination for 4.48 Psychosis (2004)
Theatre Shell - award for Prêt-à-Porter (1998-2008)
Art Critics' Association - award for The Trojan Fragments (2000)
Theatre Shell - award for The Trojan Fragments (2000)
References
External links
Luiz Päetow at Théâtre Contemporain (in French)
Luiz Päetow at Positive Den (in Portuguese)
1979 births
Living people
20th-century Brazilian male actors
21st-century Brazilian male actors
Brazilian male child actors
Brazilian male stage actors
Male actors from São Paulo |
2534233 | https://en.wikipedia.org/wiki/Mercury%20Interactive | Mercury Interactive | Mercury Interactive Corporation was an Israeli company acquired by the HP Software Division. Mercury offered software for application management, application delivery, change and configuration management, service-oriented architecture, change request, quality assurance, and IT governance.
History
In 1989, Zvi Schpizer, Ilan Kinriech and Arye Finegold founded Mercury Interactive Corporation. The company was based in California and had offices located around the world. It also had a large R&D facility in Yehud, Israel.
On 25 July 2006, Hewlett-Packard announced it would pay approximately $4.5 billion to acquire Mercury Interactive.
On 7 November 2006, Mercury Interactive formally became part of HP. The Mercury Interactive products are now sold by HP Software Division.
Mercury Interactive legacy products were integrated and sold as part of the HP IT Management Software portfolio from the HP Software Division.
Most of the Mercury Interactive software assets were apportioned to Hewlett Packard Enterprise (HPE) when HP split into two companies. In September 2017, HPE completed the sale of most of its software assets, including the legacy Mercury Interactive products to UK-based Micro Focus.
Acquisitions
From 2000 until its HP acquisition in 2006, Mercury purchased several software companies:
Conduct Software Technologies, Inc., acquired by Mercury Interactive in a share-swap deal worth about $50M, was a privately held software company founded in 1996 by Sharon Azulai, David Barzilai, and Ran Levy. The company provided network topology visualization products, to pinpoint bottlenecks and isolate the location of network problems both across the network and across the system infrastructure. Its main product was SiteRunner, which used multi-agent technology to pinpoint bottlenecks. As part of Mercury, Conduct alumni started a new project, nicknamed Falcon and later called Prism, that switched focus to monitoring web server traffic.
Freshwater Software was a software vendor of a web server monitoring and administration tool called SiteScope. Mercury Interactive acquired Freshwater Software in 2001. The product is now called HP SiteScope software.
Performant Inc. was a software vendor of J2EE diagnostic tools. Mercury Interactive acquired Performant in 2003 for $22.5M.
Kintana Inc. was a software vendor of IT governance products. Mercury Interactive acquired Kintana in June 2003 for $225M. Kintana products are now called HP Project and Portfolio Management software.
Appilog was a software vendor of auto-discovery and application mapping software. Appilog products mapped the relationships among applications and their underlying infrastructure. Mercury Interactive acquired Appilog for $49M in 2004. Appilog products are now part of HP Universal CMDB software, an HP Business Service Management offering.
BeatBox Technologies (formerly named "ClickCadence LLC") was a software vendor of real user behavior tracking products. Mercury Interactive acquired BeatBox in 2005 for approximately $14 million in cash, "to extend the real user monitoring capabilities of its BTO software and to enhance its performance lifecycle offerings." BeatBox was incorporated into Mercury's Real User Monitor (RUM) product, which is now part of HP Business Availability Center.
Systinet (formerly named IdooX) was a software vendor of registry and enablement products for standard service-oriented architecture (SOA). Mercury Interactive acquired Systinet in 2006 for $105M. Systinet products are now called HP SOA Systinet software.
Corporate malfeasance
From 4 January 2006 until its acquisition by Hewlett-Packard, Mercury Interactive was traded via the Pink Sheets as a result of being delisted from the NASDAQ due to noncompliance with filing requirements. On 3 January 2006, Mercury missed a second deadline for restating its financials, leading to the delisting.
Chief Executive Officer Amnon Landan, Chief Financial Officer Douglas Smith, and General Counsel Susan Skaer resigned in November 2005 after a special committee at the company found that they benefited from a program to favorably price option grants. The committee found that, beginning in 1995, there were 49 instances in which the stated date of a stock option grant was different from the date on which the option appeared to have been granted. In almost every case, the price on the actual date was higher than the price on the stated grant date. A former Chief Financial Officer, Sharlene Abrams, later associated with the financial misreporting, had resigned previously in November 2001.
The Chief Executive Officer, Amnon Landan, also was found to have misreported personal stock option exercise dates to increase his profit on transactions three times between 1998 and 2001. In addition, a $1 million loan to Mr. Landan in 1999—which was repaid—did not appear to have been approved in advance by the Board of Directors and was referred to in some of the company's public filings with the Securities and Exchange Commission, but was not clearly disclosed. In 2007, the SEC filed civil fraud charges against Landan, Smith, Skaer and Abrams. Without admitting or denying the SEC's allegations, Mercury Interactive agreed to pay a $28 million civil penalty to settle the Commission's charges in 2007.
The SEC settled charges against Sharlene Abrams in March 2009. Abrams agreed to pay $2,287,914 in disgorgement, of which $1,498,822 represented the "in-the-money" benefit from her exercise of backdated option grants, and a $425,000 civil penalty. In September 2009, a federal judge dismissed all charges brought by the SEC against Susan Skaer, who now goes by the name Susan Skaer Tanner.
Products
HP ALM software: Application lifecycle management and testing toolset
HP LoadRunner software: Integrated software performance testing tools
HP QuickTest Professional software: Automated software testing
HP Quality Center software (formerly HP TestDirector for Quality Center software): Quality management software for applications
HP SiteScope software: Agentless monitoring software
HP Universal CMDB software: Configuration management database
HP Project and Portfolio Management software Project Management module: zero-client software for scheduling and managing software
HP Business Process Testing software: Automated and manual testing software for test design, test creation, test maintenance, test execution, and test data management
HP Diagnostics software: Diagnostic software for applications
HP Discovery and Dependency Mapping software: Automated application and IT infrastructure mapping software
HP Functional Testing software: Automated functional and regression testing software
HP Real User Monitor software: Software that provides real-time visibility into application performance and availability from the user perspective
HP Performance Center: Application performance testing management solutions
HP Business Availability Center: Business service management solutions
HP Mobile Center: mobile application testing solution
Competitors
Quality Assurance
IBM (acquired Rational)
Micro Focus (acquired Borland which acquired Segue - SilkTest, SilkPerformer)
Parasoft
Tricentis
IT Governance / ITIL / ITSM
BMC
CA (acquired Niku)
Compuware (acquired ChangePoint)
IBM (Tivoli)
Microsoft (System Center)
Primavera Systems
Quest Software
Monitoring and Diagnostics
BMC (acquired Coradiant)
CA (acquired Wily Technology)
Compuware
IBM
Microsoft (System Center)
References
Bibliography
External links
HP IT Management (Business Technology Optimization-BTO) Software website
The New York Times - HP to Pay $4.5 Billion to Acquire Mercury
Mercury Interactive Corporation History
Computer companies of the United States
Software companies established in 1989
Hewlett-Packard acquisitions
Software companies of Israel
Mergers and acquisitions of Israeli companies
2006 mergers and acquisitions |
286550 | https://en.wikipedia.org/wiki/Safety-critical%20system | Safety-critical system | A safety-critical system (SCS) or life-critical system is a system whose failure or malfunction may result in one (or more) of the following outcomes:
death or serious injury to people
loss or severe damage to equipment/property
environmental harm
A safety-related system (or sometimes safety-involved system) comprises everything (hardware, software, and human aspects) needed to perform one or more safety functions, in which failure would cause a significant increase in the safety risk for the people or environment involved. Safety-related systems are those that do not have full responsibility for controlling hazards such as loss of life, severe injury or severe environmental damage. The malfunction of a safety-involved system would only be that hazardous in conjunction with the failure of other systems or human error. Some safety organizations provide guidance on safety-related systems, for example the Health and Safety Executive (HSE) in the United Kingdom.
Risks of this sort are usually managed with the methods and tools of safety engineering. A safety-critical system is designed to lose less than one life per billion (10⁹) hours of operation. Typical design methods include probabilistic risk assessment, a method that combines failure mode and effects analysis (FMEA) with fault tree analysis. Safety-critical systems are increasingly computer-based.
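As a toy illustration of the fault-tree arithmetic behind such targets, the sketch below combines assumed per-hour failure probabilities of independent events through AND and OR gates; the numbers are invented and the example is not drawn from any cited standard or real system.

```python
# Toy fault-tree calculation, not drawn from any cited standard: two independent
# basic events feeding an AND gate, with the result OR-ed against a third event.
# The per-hour failure probabilities below are illustrative numbers only.

def and_gate(*probs):
    """All inputs must fail (independent events): multiply probabilities."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs):
    """Any input failing causes the top event: 1 minus the product of survivals."""
    survive = 1.0
    for p in probs:
        survive *= (1.0 - p)
    return 1.0 - survive

primary_pump  = 1e-5    # probability of failure per hour (illustrative)
backup_pump   = 1e-4
control_fault = 1e-10

top_event = or_gate(and_gate(primary_pump, backup_pump), control_fault)
print(f"top event probability per hour: {top_event:.2e}")   # ~1.1e-09
```

The AND gate models redundancy (both the primary and the backup must fail), which is the usual way designers push the probability of the top event toward the one-per-billion-hours regime.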
Reliability regimes
Several reliability regimes for safety-critical systems exist:
Fail-operational systems continue to operate when their control systems fail. Examples of these include elevators, the gas thermostats in most home furnaces, and passively safe nuclear reactors. Fail-operational mode is sometimes unsafe. Nuclear weapons launch-on-loss-of-communications was rejected as a control system for the U.S. nuclear forces because it is fail-operational: a loss of communications would cause launch, so this mode of operation was considered too risky. This is contrasted with the Fail-deadly behavior of the Perimeter system built during the Soviet era.
Fail-soft systems are able to continue operating on an interim basis with reduced efficiency in case of failure. Most spare tires are an example of this: They usually come with certain restrictions (e.g. a speed restriction) and lead to lower fuel economy. Another example is the "Safe Mode" found in most Windows operating systems.
Fail-safe systems become safe when they cannot operate. Many medical systems fall into this category. For example, an infusion pump can fail, and as long as it alerts the nurse and ceases pumping, it will not threaten the loss of life because its safety interval is long enough to permit a human response. In a similar vein, an industrial or domestic burner controller can fail, but must fail in a safe mode (i.e. turn combustion off when a fault is detected); a minimal code sketch of this pattern appears after this list. Famously, nuclear weapon systems that launch-on-command are fail-safe, because if the communications systems fail, launch cannot be commanded. Railway signaling is designed to be fail-safe.
Fail-secure systems maintain maximum security when they cannot operate. For example, while fail-safe electronic doors unlock during power failures, fail-secure ones will lock, keeping an area secure.
Fail-passive systems continue to operate in the event of a system failure. An example is an aircraft autopilot: in the event of a failure, the aircraft would remain in a controllable state, allowing the pilot to take over, complete the journey and perform a safe landing.
Fault-tolerant systems avoid service failure when faults are introduced to the system. An example may include control systems for ordinary nuclear reactors. The normal method to tolerate faults is to have several computers continually test the parts of a system, and switch on hot spares for failing subsystems. As long as faulty subsystems are replaced or repaired at normal maintenance intervals, these systems are considered safe. The computers, power supplies and control terminals used by human beings must all be duplicated in these systems in some fashion.
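The following Python fragment is a minimal sketch of the fail-safe pattern referenced in the list above, using the burner-controller example: any detected fault, and any unexpected error, drives the system to its safe state. The sensor reading, temperature limit and function names are all invented for the illustration and stand in for real hardware I/O.

```python
# Minimal sketch of the fail-safe pattern from the list above, using the burner
# controller example: on any detected fault (or any unexpected error) the only
# action taken is to drive the system to its safe state. Sensor values are mocked.

class FlameSensorError(Exception):
    pass

def read_flame_sensor():
    # Stand-in for real hardware I/O; here it always reports a healthy flame.
    return {"flame_present": True, "temperature_c": 180.0}

def close_fuel_valve():
    print("fuel valve closed (safe state)")

def burner_control_cycle(max_temp_c=250.0):
    try:
        reading = read_flame_sensor()
        if not reading["flame_present"] or reading["temperature_c"] > max_temp_c:
            raise FlameSensorError("unsafe reading")
        print("combustion continues")
    except Exception:
        # Fail safe: any fault, including unexpected ones, shuts combustion off.
        close_fuel_valve()

burner_control_cycle()
```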
Software engineering for safety-critical systems
Software engineering for safety-critical systems is particularly difficult. Three aspects can be applied to aid the engineering of software for life-critical systems. The first is process engineering and management. The second is selecting the appropriate tools and environment for the system, which allows the system developer to effectively test the system by emulation and observe its effectiveness. The third is addressing any legal and regulatory requirements, such as FAA requirements for aviation. Setting a standard under which a system is required to be developed forces the designers to stick to its requirements. The avionics industry has succeeded in producing standard methods for producing life-critical avionics software. Similar standards exist for industry in general (IEC 61508) and for the automotive (ISO 26262), medical (IEC 62304) and nuclear (IEC 61513) industries specifically. The standard approach is to carefully code, inspect, document, test, verify and analyze the system. Another approach is to certify a production system and a compiler, and then generate the system's code from specifications. Another approach uses formal methods to generate proofs that the code meets requirements. All of these approaches improve the software quality in safety-critical systems by testing or eliminating manual steps in the development process, because people make mistakes, and these mistakes are the most common cause of potential life-threatening errors.
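As a minimal, hypothetical illustration of the defensive, heavily checked coding style described above, the fragment below clamps an infusion rate to an invented limit, validates its inputs, asserts its postcondition and runs a small test sweep. Real safety-critical development layers reviews, static analysis, certified toolchains and, in some cases, formal proofs on top of checks like these.

```python
# Illustrative only: a tiny dosage-limiting routine written in a defensive,
# heavily checked style. The limit and units are invented and this is not
# derived from any medical or avionics standard.

MAX_RATE_ML_PER_H = 500.0

def clamp_infusion_rate(requested_rate):
    """Return a rate that is guaranteed to lie in [0, MAX_RATE_ML_PER_H]."""
    if not isinstance(requested_rate, (int, float)):
        raise TypeError("rate must be numeric")          # reject bad inputs early
    if requested_rate != requested_rate:                  # NaN check
        raise ValueError("rate must not be NaN")
    safe_rate = min(max(float(requested_rate), 0.0), MAX_RATE_ML_PER_H)
    assert 0.0 <= safe_rate <= MAX_RATE_ML_PER_H          # postcondition
    return safe_rate

# A small test sweep standing in for the far more rigorous verification
# (reviews, analysis, formal proofs) that real safety-critical processes require.
for rate in (-10, 0, 250, 9999, 1e12):
    assert 0.0 <= clamp_infusion_rate(rate) <= MAX_RATE_ML_PER_H
print("all checks passed")
```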
Examples of safety-critical systems
Infrastructure
Circuit breaker
Emergency services dispatch systems
Electricity generation, transmission and distribution
Fire alarm
Fire sprinkler
Fuse (electrical)
Fuse (hydraulic)
Life support systems
Telecommunications
Burner Control systems
Medicine
The technology requirements can go beyond avoidance of failure, and can even facilitate medical intensive care (which deals with healing patients) as well as life support (which is for stabilizing patients).
Heart-lung machines
Mechanical ventilation systems
Infusion pumps and Insulin pumps
Radiation therapy machines
Robotic surgery machines
Defibrillator machines
Pacemaker devices
Dialysis machines
Devices that electronically monitor vital functions (electrography; especially, electrocardiography, ECG or EKG, and electroencephalography, EEG)
Medical imaging devices (X-ray; computerized tomography, CT or CAT; the various magnetic resonance imaging, MRI, techniques; positron emission tomography, PET)
Even healthcare information systems have significant safety implications
Nuclear engineering
Nuclear reactor control systems
Recreation
Amusement rides
Climbing equipment
Parachutes
Scuba equipment
Diving rebreather
Dive computer (depending on use)
Transport
Railway
Railway signalling and control systems
Platform detection to control train doors
Automatic train stop
Automotive
Airbag systems
Braking systems
Seat belts
Power Steering systems
Advanced driver-assistance systems
Electronic throttle control
Battery management system for hybrids and electric vehicles
Electric park brake
Shift by wire systems
Drive by wire systems
Park by wire
Aviation
Air traffic control systems
Avionics, particularly fly-by-wire systems
Radio navigation RAIM
Engine control systems
Aircrew life support systems
Flight planning to determine fuel requirements for a flight
Spaceflight
Human spaceflight vehicles
Rocket range launch safety systems
Launch vehicle safety
Crew rescue systems
Crew transfer systems
See also
(risk analysis software)
High integrity software
Real-time computing
References
External links
An Example of a Life-Critical System
Safety-critical systems Virtual Library
Explanation of Fail Operational and Fail Passive in Avionics
Engineering failures
Formal methods
Software quality
Safety
Risk analysis
Safety engineering
Computer systems
Control engineering |
155558 | https://en.wikipedia.org/wiki/Sandia%20National%20Laboratories | Sandia National Laboratories | The Sandia National Laboratories (SNL) is one of three National Nuclear Security Administration research and development laboratories in the United States, managed and operated by the National Technology and Engineering Solutions of Sandia (a wholly owned subsidiary of Honeywell International). Their primary mission is to develop, engineer, and test the non-nuclear components of nuclear weapons and high technology. Headquartered on Kirtland Air Force Base in Albuquerque, New Mexico, it also has a campus in Livermore, California, next to Lawrence Livermore National Laboratory, and a test facility in Waimea, Kauai, Hawaii.
It is Sandia's mission to maintain the reliability and surety of nuclear weapon systems, conduct research and development in arms control and nonproliferation technologies, and investigate methods for the disposal of the United States' nuclear weapons program's hazardous waste. Other missions include research and development in energy and environmental programs, as well as the surety of critical national infrastructures. In addition, Sandia is home to a wide variety of research including computational biology, mathematics (through its Computer Science Research Institute), materials science, alternative energy, psychology, MEMS, and cognitive science initiatives. Sandia formerly hosted ASCI Red, one of the world's fastest supercomputers until its decommissioning in 2006, and now hosts ASCI Red Storm, originally known as Thor's Hammer. Sandia is also home to the Z Machine, the largest X-ray generator in the world, which is designed to test materials in conditions of extreme temperature and pressure and to gather data that aids in computer modeling of nuclear weapons. In December 2016, it was announced that National Technology and Engineering Solutions of Sandia, under the direction of Honeywell International, would take over the management of Sandia National Laboratories starting on May 1, 2017.
Lab history
Sandia National Laboratories' roots go back to World War II and the Manhattan Project. Prior to the United States formally entering the war, the U.S. Army leased land near an Albuquerque, New Mexico airport known as Oxnard Field, to service transient Army and U.S. Navy aircraft. In January 1941 construction began on the Albuquerque Army Air Base, leading to establishment of the Bombardier School-Army Advanced Flying School near the end of the year. Soon thereafter it was renamed Kirtland Field, after early Army military pilot Colonel Roy C. Kirtland, and in mid-1942 the Army acquired Oxnard Field. During the war years facilities were expanded further and Kirtland Field served as a major Army Air Forces training installation.
In the many months leading up to successful detonation of the first atomic bomb, the Trinity test, and delivery of the first airborne atomic weapon, Project Alberta, J. Robert Oppenheimer, Director of Los Alamos Laboratory, and his technical advisor, Hartly Rowe, began looking for a new site convenient to Los Alamos for the continuation of weapons development, especially its non-nuclear aspects. They felt a separate division would be best to perform these functions. Kirtland had fulfilled Los Alamos' transportation needs for both the Trinity and Alberta projects; thus, Oxnard Field was transferred from the jurisdiction of the Army Air Corps to the U.S. Army Service Forces Chief of Engineer District, and thereafter, assigned to the Manhattan Engineer District. In July 1945, the forerunner of Sandia Laboratory, known as "Z" Division, was established at Oxnard Field to handle future weapons development, testing, and bomb assembly for the Manhattan Engineer District. The District directive calling for establishing a secure area and construction of "Z" Division facilities referred to this as "Sandia Base", after the nearby Sandia Mountains — apparently the first official recognition of the "Sandia" name.
Sandia Laboratory was operated by the University of California until 1949, when President Harry S. Truman asked Western Electric, a subsidiary of American Telephone and Telegraph (AT&T), to assume the operation as an "opportunity to render an exceptional service in the national interest." Sandia Corporation, a wholly owned subsidiary of Western Electric, was formed on October 5, 1949, and, on November 1, 1949, took over management of the Laboratory. The United States Congress designated Sandia Laboratories as a National laboratory in 1979. Beginning in October 1993, Sandia National Laboratories (SNL) was managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin. In May 2017, management of Sandia National Laboratories transitioned to National Technology and Engineering Solutions of Sandia, a wholly owned subsidiary of Honeywell International, covering government-owned facilities in Albuquerque, New Mexico (SNL/NM); Livermore, California (SNL/CA); Tonopah, Nevada; Shoreview, Minnesota; and Kauai, Hawaii. SNL/NM is the headquarters and the largest laboratory, with more than 6,600 employees, while SNL/CA is a smaller laboratory, with about 850 employees. Tonopah and Kauai are occupied on a "campaign" basis, as test schedules dictate.
Sandia led a project that studied how to decontaminate a subway system in the event of a biological weapons attack (such as anthrax). As of September 2017, the process to decontaminate subways in such an event is "virtually ready to implement," said a lead Sandia engineer.
Sandia's integration with its local community includes a program through the Department of Energy's Tribal Energy program to deliver alternative renewable power to remote Navajo communities, spearheaded by senior engineer Sandra Begay.
Legal issues
On February 13, 2007, a New Mexico State Court found Sandia Corporation liable for $4.7 million in damages for the firing of a former network security analyst, Shawn Carpenter, who had reported to his supervisors that hundreds of military installations and defense contractors' networks were compromised and sensitive information was being stolen including hundreds of sensitive Lockheed documents on the Mars Reconnaissance Orbiter project. When his supervisors told him to drop the investigation and do nothing with the information, he went to intelligence officials in the United States Army and later the Federal Bureau of Investigation to address the national security breaches. When Sandia managers discovered his actions months later, they revoked his security clearance and fired him.
In 2014 an investigation determined Sandia Corp. used lab operations funds to pay for lobbying related to the renewal of its $2 billion contract to operate the lab. Sandia Corp. and its parent company, Lockheed Martin, agreed to pay a $4.8 million fine.
Technical areas
SNL/NM consists of five technical areas (TA) and several additional test areas. Each TA has its own distinctive operations; however, the operations of some groups at Sandia may span more than one TA, with one part of a team working on a problem from one angle, and another subset of the same team located in a different building or area working with other specialized equipment. A description of each area is given below.
TA-I operations are dedicated primarily to three activities: the design, research, and development of weapon systems; limited production of weapon system components; and energy programs. TA-I facilities include the main library and offices, laboratories, and shops used by administrative and technical staff.
TA-II is a facility that was established in 1948 for the assembly of chemical high explosive main charges for nuclear weapons and later for production-scale assembly of nuclear weapons. Activities in TA-II include the decontamination, decommissioning, and remediation of facilities and landfills used in past research and development activities. Remediation of the Classified Waste Landfill, which started in March 1998, neared completion in FY2000. A testing facility, the Explosive Component Facility, integrates many of the previous TA-II test activities as well as some testing activities previously performed in other remote test areas. The Access Delay Technology Test Facility is also located in TA-II.
TA-III is adjacent to and south of TA-V [both are approximately seven miles (11 km) south of TA-I]. TA-III facilities include extensive design-test facilities such as rocket sled tracks, centrifuges and a radiant heat facility. Other facilities in TA-III include a paper destructor, the Melting and Solidification Laboratory and the Radioactive and Mixed Waste Management Facility (RMWMF). RMWMF serves as central processing facility for packaging and storage of low-level and mixed waste. The remediation of the Chemical Waste Landfill, which started in September 1998, is an ongoing activity in TA-III.
TA-IV, located approximately 1/2 mile (1 km) south of TA-I, consists of several inertial-confinement fusion research and pulsed power research facilities, including the High Energy Radiation Megavolt Electron Source (Hermes-III), the Z Facility, the Short Pulsed High Intensity Nanosecond X-Radiator (SPHINX) Facility, and the Saturn Accelerator. TA-IV also hosts some computer science and cognition research.
TA-V contains two research reactor facilities, an intense gamma irradiation facility (using cobalt-60 and caesium-137 sources), and the Hot Cell Facility.
SNL/NM also has test areas outside of the five technical areas listed above. These test areas, collectively known as Coyote Test Field, are located southeast of TA-III and/or in the canyons on the west side of the Manzanita Mountains. Facilities in the Coyote Canyon Test Field include the Solar Tower Facility (34.9623 N, 106.5097 W), the Lurance Canyon Burn Site and the Aerial Cable Facility.
Open-source software
In the 1970s, the Sandia, Los Alamos, Air Force Weapons Laboratory Technical Exchange Committee initiated the development of the SLATEC library of mathematical and statistical routines, written in FORTRAN 77.
Today, Sandia National Laboratories is home to several open-source software projects:
FCLib (Feature Characterization Library) is a library for the identification and manipulation of coherent regions or structures from spatio-temporal data. FCLib focuses on providing data structures that are "feature-aware" and support feature-based analysis. It is written in C and developed under a "BSD-like" license.
LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a molecular dynamics library that can be used to model parallel atomic/subatomic processes at large scale. It is produced under the GNU General Public License (GPL) and distributed on the Sandia National Laboratories website as well as SourceForge.
LibVMI is a library for simplifying the reading and writing of memory in running virtual machines, a technique known as virtual machine introspection. It is licensed under the GNU Lesser General Public License.
MapReduce-MPI Library is an implementation of MapReduce for distributed-memory parallel machines, utilizing the Message Passing Interface (MPI) for communication. It is developed under a modified Berkeley Software Distribution license.
MultiThreaded Graph Library (MTGL) is a collection of graph-based algorithms designed to take advantage of parallel, shared-memory architectures such as the Cray XMT, Symmetric Multiprocessor (SMP) machines, and multi-core workstations. It is developed under a BSD License.
ParaView is a cross-platform application for performing data analysis and visualization. It is a collaborative effort, developed by Sandia National Laboratories, Los Alamos National Laboratories, and the United States Army Research Laboratory, and funded by the Advanced Simulation and Computing Program. It is developed under a BSD license.
Pyomo is a Python-based optimization modeling language which supports most commercial and open-source solver engines; a minimal usage sketch appears after this list.
Soccoro, a collaborative effort with Wake Forest and Vanderbilt Universities, is object-oriented software for performing electronic-structure calculations based on density-functional theory. It utilizes libraries such as MPI, BLAS, and LAPACK and is developed under the GNU General Public License.
Titan Informatics Toolkit is a collection of cross-platform libraries for ingesting, analyzing, and displaying scientific and informatics data. It is a collaborative effort with Kitware, Inc., and uses various open-source components such as the Boost Graph Library. It is developed under a New BSD license.
Trilinos is an object-oriented library for building scalable scientific and engineering applications, with a focus on linear algebra techniques. Most Trilinos packages are licensed under a Modified BSD License.
Xyce is an open source, SPICE-compatible, high-performance analog circuit simulator, capable of solving extremely large circuit problems.
Charon is a TCAD simulator which was open-sourced by Sandia in 2020. It is significant as previously there were no major TCAD simulators for large-scale simulations that were open source.
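As a minimal usage sketch of Pyomo, one of the packages listed above, the fragment below builds and solves a small two-variable linear program. The objective, constraint and the choice of the GLPK solver are arbitrary examples, and an external solver has to be installed separately for the final call to succeed.

```python
# Minimal Pyomo sketch (Pyomo is one of the Sandia-developed packages listed above).
# The objective, bounds and solver name are arbitrary examples; solving requires a
# separately installed solver such as GLPK.
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           SolverFactory, NonNegativeReals, maximize)

model = ConcreteModel()
model.x = Var(domain=NonNegativeReals)
model.y = Var(domain=NonNegativeReals)
model.profit = Objective(expr=3 * model.x + 2 * model.y, sense=maximize)
model.capacity = Constraint(expr=model.x + model.y <= 10)

SolverFactory("glpk").solve(model)            # assumes GLPK is installed
print(model.x.value, model.y.value)
```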
In addition, Sandia National Laboratories collaborates with Kitware, Inc. in developing the Visualization Toolkit (VTK), a cross-platform graphics and visualization software suite. This collaboration has focused on enhancing the information visualization capabilities of VTK and has in turn fed back into other projects such as ParaView and Titan.
Self-guided bullet
On January 30, 2012, Sandia announced that it successfully test-fired a self-guided dart that can hit targets at . The dart is long, has its center of gravity at the nose, and is made to be fired from a small-caliber smoothbore gun. It is kept straight in flight by four electromagnetically actuated fins encased in a plastic puller sabot that falls off when the dart leaves the bore. The dart cannot be fired from conventional rifled barrels because the gyroscopic stability provided by rifling grooves for regular bullets would prevent the self-guided bullet from reliably turning towards a target when in flight, so fins are responsible for stabilizing rather than spinning. A laser designator marks a target, which is tracked by the dart's optical sensor and 8-bit CPU. The guided projectile is kept cheap because it does not need an inertial measurement unit, since its small size allows it to make the fast corrections necessary without the aid of an IMU. The natural body frequency of the bullet is about 30 hertz, so corrections can be made 30 times per second in flight. Muzzle velocity with commercial gunpowder is (Mach 2.1), but military customized gunpowder can increase its speed and range. Computer modeling shows that a standard bullet would miss a target at by , while an equivalent guided bullet would hit within . Accuracy increases as distances get longer, since the bullet's motions settle more the longer it is in flight.
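The sketch below is only a generic illustration of what a 30 Hz correction loop means in practice: a proportional-derivative steering law repeatedly nudging the measured lateral offset toward zero during flight. It is not Sandia's guidance algorithm, and every constant in it (gain, damping, forward speed, range and initial error) is invented for the example.

```python
# Toy sketch of a 30 Hz correction loop; a generic proportional-derivative steering
# law, NOT Sandia's guidance algorithm. Every constant (gain, damping, speed, range,
# initial error) is an invented illustrative value.

DT = 1.0 / 30.0      # one correction every 1/30 s, i.e. 30 corrections per second
GAIN, DAMPING = 100.0, 20.0
SPEED_M_S = 700.0    # forward speed, roughly Mach 2 (illustrative)

def fly(initial_offset_m, target_range_m):
    """Track the lateral offset from the aim point as the dart flies downrange."""
    offset, lateral_v, downrange, corrections = initial_offset_m, 0.0, 0.0, 0
    while downrange < target_range_m:
        # the optical sensor reports the lateral miss distance; fins steer against it
        lateral_accel = -GAIN * offset - DAMPING * lateral_v
        lateral_v += lateral_accel * DT
        offset += lateral_v * DT
        downrange += SPEED_M_S * DT
        corrections += 1
    return offset, corrections

miss, n = fly(1.0, 1000.0)
print(f"{n} corrections in flight, final lateral offset {abs(miss):.4f} m")
```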
Supercomputers
List of supercomputers that have resided at Sandia:
Intel Paragon XP/S 140, 1993 to ?
ASCI Red, 1997 to 2006
Red Storm, 2005 to 2012
Cielo, 2010 to 2016
Trinity, 2015 to Current
Astra, 2018 to Current, based on ARM processors
Attaway, 2019 to Current
See also
Titan Rain
National Renewable Energy Laboratory
Test Readiness Program
Jess programming language
VxInsight
Decontamination foam
References
Further reading
Computerworld article "Reverse Hacker Case Gets Costlier for Sandia Labs"
San Jose Mercury News article "Ill Lab Workers Fight For Federal Compensation"
Wired Magazine article "Linkin Park's Mysterious Cyberstalker"
Slate article "Stalking Linkin Park"
FedSmith.com article "Linkin Park, Nuclear Research and Obsession"
The Santa Fe New Mexican article "Judge Upholds $4.3 Million Jury Award to Fired Sandia Lab Analyst"
TIME article "A Security Analyst Wins Big in Court"
The Santa Fe New Mexican article "Jury Awards Fired Sandia Analyst $4.3 Million"
HPCwire article "Sandia May Unwittingly Have Sold Supercomputer to China"
Federal Computer Weekly article "Intercepts: Chinese Checkers"
Congressional Research Service report "China: Suspected Acquisition of U.S. Nuclear Weapon Secrets"
Sandia National Laboratory Cooperative Monitoring Center article "Engagement with China"
BBC News "Security Overhaul at US Nuclear Labs"
Fox News "Iowa Republican Demands Tighter Nuclear Lab Security"
UPI article "Workers Get Bonus After Being Disciplined"
IndustryWeek article "3D Silicon Photonic Lattice"
October 6, 2005 The Santa Fe New Mexican article "Sandia Security Managers Recorded Workers' Calls"
May 17, 2002 New Mexico Business Weekly article "Sandia National Laboratories Says it's Worthless"
External links
DOE Laboratory Fact Sheet
Economy of Albuquerque, New Mexico
Nuclear weapons infrastructure of the United States
United States Department of Energy national laboratories
Federally Funded Research and Development Centers
Laboratories in the United States
Supercomputer sites
Weapons manufacturing companies
Honeywell
Lockheed Martin
Livermore, California
Military research of the United States
1949 establishments in New Mexico |