id | url | title | text |
---|---|---|---|
40848048 | https://en.wikipedia.org/wiki/2013%20Las%20Vegas%20Bowl | 2013 Las Vegas Bowl | The 2013 Las Vegas Bowl was an American college football bowl game that was played on December 21, 2013, at Sam Boyd Stadium in Whitney, Nevada, in the Las Vegas Valley. The 22nd annual Las Vegas Bowl, it featured the Mountain West Conference champion Fresno State Bulldogs against the USC Trojans of the Pac-12 Conference. The game started at 12:30 p.m. PST and aired on ABC and Sports USA Radio. It was one of the 2013–14 bowl games that concluded the 2013 FBS football season. Sponsored by motor oil manufacturer Royal Purple, the game was officially known as the Royal Purple Las Vegas Bowl. The Trojans won by a score of 45–20.
Teams
Fresno State Bulldogs
Continuing their conference success from the previous season, in which they finished 7–1 and shared the conference title with Boise State, the Bulldogs did even better in 2013. Having already won the Mountain West Conference's West Division championship, the Bulldogs went on to win the first Mountain West Conference Championship Game, defeating Utah State by a score of 24–17 and advancing to the Las Vegas Bowl by virtue of their victory.
This was the Bulldogs' second Las Vegas Bowl appearance; they had previously played in the 1999 game, losing to the Utah Utes 17–16 on a last-second field goal.
USC Trojans
The season was tumultuous by USC standards: head coach Lane Kiffin was fired five games into the season, and interim head coach Ed Orgeron resigned at season's end (Clay Helton served as interim coach for the bowl game, with Steve Sarkisian taking over for the 2014 season). Nonetheless, the Trojans still managed a 6–3 conference and 9–4 overall record (tied with the rival UCLA Bruins for second in the Pac-12 South Division), earning them a berth in the Las Vegas Bowl at season's end.
This was USC's second Las Vegas Bowl appearance; the Trojans had previously played in the 2001 game, losing to future conference foe Utah by a score of 10–6.
Game summary
Scoring summary
Statistics
Notes
Each school was allotted 11,000 tickets.
Cody Kessler was named the game's MVP.
References
Las Vegas Bowl
Fresno State Bulldogs football bowl games
USC Trojans football bowl games
December 2013 sports events in the United States |
2403266 | https://en.wikipedia.org/wiki/Digital%20Eel | Digital Eel | Digital Eel is a self-funded independent video game development team located in the Seattle, Washington area. Digital Eel is best known for its Infinite Space series of space roguelikes.
History
The group was formed in 2001 by Rich Carlson (Ion Storm, Looking Glass Studios), Iikka Keränen (Looking Glass Studios, Valve) and Bill "Phosphorus" Sears (KnowWonder, GameHouse).
In April 2013, Digital Eel announced plans for the third installment of the Infinite Space series, Infinite Space III: Sea of Stars, and turned to Kickstarter.com to crowdfund the project. Funding was successful.
Developers
Rich Carlson – design, sound, music and art
Iikka Keränen – design, code and art
Bill "Phosphorus" Sears (deceased) – artist, music and design
Henry Kropf – code, macOS expert
Chris Collins – code, macOS expert, Android expert
Games
Weird Worlds: Return to Infinite Space (Android, iPad, iPhone, 2021)
Strange Adventures in Infinite Space reissue (Linux, macOS, Windows, 2020)
Goblin Slayer Third Edition (boardgame, 2019)
Protagon (VR game, HTC Vive/Windows, 2017)
Infinite Space Battle Poker (card game, 2016)
Pairs: Infinite Space (card game, 2016)
Infinite Space III: Sea of Stars (Windows, Mac, 2015)
Eat Electric Death! (boardgame, 2013)
Infinite Space Explorers: X-1 Expansion (card game, 2012)
Infinite Space Explorers (card game, 2012)
Data Jammers: FastForward (Windows, Mac, 2011 & 2015)
Space Ludo (boardgame, 2009)
BrainPipe: A Plunge to Unhumanity (Windows, Mac & iPhone, 2008 & 2009)
Goblin Slayer (boardgame, 2008)
Soup du Jour (Windows & iPad, 2007 & 2011)
Eat Electric Death! (boardgame, 2007 but shelved by publisher)
Weird Worlds: Return to Infinite Space (Windows, Mac, 2005, 2006, 2011 & 2014)
Diceland Space: Terrans vs. Urluquai (setting, ship types & art, tabletop game, 2005)
Diceland Space: Garthans vs. Muktians (setting, ship types & art, tabletop game, 2005)
Mac OS X Boiler Plate Special (Mac, 2004)
Digital Eel's Big Box of Blox (Windows, Mac, handhelds, smartphones, 2003–2008)
Dr. Blob's Organism (Windows & Mac, 2003)
Strange Adventures in Infinite Space (Windows, Mac & handhelds, 2002–2021)
Plasmaworm (Windows, July 17, 2001)
Reception
Digital Eel is best known for its Infinite Space series of space roguelikes, Strange Adventures in Infinite Space (2002), Weird Worlds: Return to Infinite Space (2005) and Infinite Space III: Sea of Stars (2015). Strange Adventures and Weird Worlds pioneered the space roguelike subgenre, inspiring later efforts like FTL: Faster Than Light.
Awards
Excellence in Audio: Brainpipe (IGF, 2009)
Innovation in Audio: Weird Worlds: Return to Infinite Space (IGF, 2006)
Seumas McNally Grand Prize finalist: Weird Worlds: Return to Infinite Space (IGF, 2006)
Quest/Adventure Game of the Year: Weird Worlds: Return to Infinite Space (Game Tunnel, 2005)
Innovation in Visual Art: Dr. Blob's Organism (IGF, 2004)
Innovation in Audio: Dr. Blob's Organism (IGF, 2004)
References
External links
Companies based in Seattle
Independent video game developers
Video game companies of the United States
Video game development companies
Video game companies established in 2001 |
21249207 | https://en.wikipedia.org/wiki/Trellix | Trellix | Trellix (formerly FireEye and McAfee Enterprise) is a privately held cybersecurity company founded in 2004. It has been involved in the detection and prevention of major cyber attacks.
It provides hardware, software, and services to investigate cybersecurity attacks, protect against malicious software, and analyze IT security risks.
Initially, it focused on developing virtual machines to download and test internet traffic before transferring it to a corporate or government network. The company diversified over time, in part through acquisitions. In 2014, it acquired Mandiant, which provides incident response services following the identification of a security breach. FireEye went public in 2013 and remained so until 2021. USA Today said FireEye "has been called in to investigate high-profile attacks against Target, JP Morgan Chase, Sony Pictures, Anthem, and others".
In June 2021, FireEye sold its name and products business to Symphony Technology Group (STG) for $1.2 billion. STG combined FireEye with its acquisition of McAfee's enterprise business to launch Trellix, an extended detection and response (XDR) company.
History
FireEye was founded in 2004 by Ashar Aziz, a former Sun Microsystems engineer. FireEye's first commercial product was not developed and sold until 2010. That same year, FireEye expanded into the Middle East. This was followed by the opening of new offices in Asia Pacific in 2010, Europe in 2011 and Africa in 2013.
In December 2012, founder Aziz stepped down as CEO and former McAfee CEO David DeWalt was appointed to the position. DeWalt was recruited in order to prepare the company for an initial public offering (IPO). The following year, FireEye raised an additional $50 million in venture capital, bringing its total funding to $85 million. In late 2013, FireEye went public, raising $300 million.
At the time, FireEye was growing rapidly. It had 175 employees in 2011, which grew to 900 by June 2013. Revenues multiplied eight-fold between 2010 and 2012. However, FireEye was not yet profitable, due to high operating costs such as research and development expenses.
In December 2013, FireEye acquired Mandiant for $1 billion. Mandiant was a private company founded in 2004 by Kevin Mandia that provided incident response services in the event of a data security breach. Mandiant was known for investigating high-profile hacking groups. Before the acquisition, FireEye would often identify a security breach, then partner with Mandiant to investigate who the hackers were. Mandiant became a subsidiary of FireEye.
In late 2014, FireEye initiated a secondary offering, selling another $1.1 billion in shares, in order to fund development of a wider range of products. Shortly afterward, FireEye acquired another data breach investigation company, nPulse, for approximately $60 million. By 2015, FireEye was making more than $100 million in annual revenue, but was still unprofitable, largely due to research and development spending.
In January 2016, FireEye acquired iSIGHT Partners for $275 million. iSIGHT was a threat intelligence company that gathered information about hacker groups and other cybersecurity risks. This was followed by the acquisition of Invotas, an IT security automation company. DeWalt stepped down as CEO in 2016 and was replaced by Mandiant CEO and former FireEye President Kevin Mandia. Afterwards, there was a downsizing and restructuring in response to lower-than-expected sales, resulting in a layoff of 300–400 employees. Profit and revenue increased on account of shifts to a subscription model and lower costs.
In March 2021, Symphony Technology Group (STG) acquired McAfee Enterprise for $4 billion. In June 2021, FireEye announced the sale of its products business and name to STG for $1.2 billion. The sale split off its cyber forensics unit, Mandiant, and the FireEye stock symbol FEYE was relaunched as MNDT on the NASDAQ on 5 October 2021. On 30 September 2021, STG announced Bryan Palma as CEO of the combined company, and on January 18, 2022, STG announced the launch of Trellix, an extended detection and response company combining FireEye and the McAfee enterprise business.
Products and services
FireEye started out as a "sandboxing" company. Sandboxing is a technique in which incoming network traffic is opened within a virtual machine and tested for malicious software before being introduced into the network. FireEye's products diversified over time, in part through acquisitions. In 2017, FireEye transitioned from primarily selling appliances to a software-as-a-service model.
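The sandboxing workflow described above can be illustrated with a short, hedged sketch. The following Python example is purely conceptual and is not FireEye/Trellix code; the detonate_in_vm function is a hypothetical stand-in for detonating a sample inside an isolated virtual machine and collecting behavioural indicators.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    malicious: bool
    indicators: list

def detonate_in_vm(sample: bytes) -> Verdict:
    """Hypothetical stand-in: run the sample in an isolated VM and record
    behaviour (file writes, registry changes, C2 callbacks). Here the
    decision is only simulated with a trivial byte check."""
    suspicious = sample[:2] == b"MZ"  # e.g. an unexpected executable header in web traffic
    return Verdict(malicious=suspicious,
                   indicators=["pe-header-in-web-traffic"] if suspicious else [])

def inspect_traffic(payloads):
    """Detonate each payload before it is allowed onto the network."""
    for payload in payloads:
        verdict = detonate_in_vm(payload)
        action = "block" if verdict.malicious else "allow"
        print(action, verdict.indicators)

if __name__ == "__main__":
    inspect_traffic([b"GET / HTTP/1.1 ...", b"MZ\x90\x00..."])
```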
FireEye sells technology products including network, email, and endpoint security, a platform for managing security operations centers called Helix, consulting services primarily based on incident response, and threat intelligence products.
The Central Management System (CMS) consolidates the management, reporting, and data sharing of Web MPS (Malware Protection System), Email MPS, File MPS, and Malware Analysis System (MAS) into a single network-based appliance by acting as a distribution hub for malware security intelligence.
The FireEye Cloud crowd-sources Dynamic Threat Intelligence (DTI) detected by individual FireEye MPS appliances, and automatically distributes this time sensitive zero-day intelligence globally to all subscribed customers in frequent updates. Content Updates include a combination of DTI and FireEye Labs generated intelligence identified through research efforts.
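The crowd-sourcing pattern described above can be sketched in a few lines. This is a hedged, purely illustrative model, not FireEye's DTI protocol or API: individual sensors report newly observed indicators into a shared set, and every subscriber pulls the merged feed.

```python
class ThreatIntelCloud:
    """Toy model of a shared indicator feed (not the real DTI service)."""
    def __init__(self):
        self.indicators = {}  # indicator -> set of reporting sensors

    def report(self, appliance_id, new_indicators):
        # An individual sensor contributes what it has just detected.
        for ioc in new_indicators:
            self.indicators.setdefault(ioc, set()).add(appliance_id)

    def pull_update(self):
        # Every subscriber receives the merged, de-duplicated feed.
        return sorted(self.indicators)

cloud = ThreatIntelCloud()
cloud.report("appliance-eu-1", {"evil.example.com", "203.0.113.7"})
cloud.report("appliance-us-2", {"203.0.113.7", "bad-hash-abc123"})
print(cloud.pull_update())  # all subscribers now see all three indicators
```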
As of its inception in January 2022, Trellix has more than 40,000 customers, 5,000 employees, and $2 billion in annual revenue. Trellix includes the endpoint, cloud, collaboration, data and user, application, and infrastructure security capabilities of FireEye and McAfee. The business focuses on threat detection and response using machine learning and automation, with security technology that has the capability to learn and adapt in order to combat advanced threats.
Operations
FireEye has been known for uncovering high-profile hacking groups.
2008-2014
In October/November 2009, FireEye participated in an effort to take down the Mega-D botnet (also known as Ozdok). On March 16, 2011, the Rustock botnet was taken down through an action by Microsoft, US federal law enforcement agents, FireEye, and the University of Washington. In July 2012, FireEye was involved in analysis of the Grum botnet's command and control servers located in the Netherlands, Panama, and Russia.
In 2013, Mandiant (before being acquired by FireEye) uncovered a multi-year espionage effort by a Chinese hacking group called APT1.
In 2014, the FireEye Labs team identified two new zero-day vulnerabilities – – as part of limited, targeted attacks against major corporations. Both zero-days exploit the Windows kernel. Microsoft addressed the vulnerabilities in their October 2014 Security Bulletin. Also in 2014, FireEye provided information on a threat group it calls FIN4. FIN4 appears to conduct intrusions that are focused on a single objective: obtaining access to insider information capable of making or breaking the stock prices of public companies. The group has targeted hundreds of companies, and specifically targets the emails of C-level executives, legal counsel, regulatory, risk, and compliance personnel, and other individuals who would regularly discuss confidential, market-moving information. Also in 2014, FireEye released a report focused on a threat group it refers to as APT28. APT28 focuses on collecting intelligence that would be most useful to a government. FireEye found that since at least 2007, APT28 has been targeting privileged information related to governments, militaries, and security organizations that would likely benefit the Russian government.
2015
In 2015, FireEye confirmed the existence of at least 14 router implants spread across four different countries: Ukraine, Philippines, Mexico, and India. Referred to as SYNful Knock, the implant is a stealthy modification of the router’s firmware image that can be used to maintain persistence within a victim’s network.
In September 2015, FireEye obtained an injunction against a security researcher attempting to report vulnerabilities in FireEye Malware Protection System.
In 2015, FireEye uncovered an attack exploiting two previously unknown vulnerabilities, one in Microsoft Office () and another in Windows (). The attackers hid the exploit within a Microsoft Word document (.docx) that appeared to be a résumé. The combination of these two exploits grant fully privileged remote code execution. Both vulnerabilities were patched by Microsoft.
In 2015, the FireEye as a Service team in Singapore uncovered a phishing campaign exploiting an Adobe Flash Player zero-day vulnerability (). Adobe released a patch for the vulnerability with an out-of-band security bulletin. FireEye attributed the activity to a China-based threat group it tracks as APT3.
2016
In 2016, FireEye announced that it has been tracking a pair of cybercriminals referred to as the “Vendetta Brothers.” The company said that the enterprising duo uses various strategies to compromise point-of-sale systems, steal payment card information and sell it on their underground marketplace “Vendetta World.”
In mid-2016, FireEye released a report on the impact of the 2015 agreement between former U.S. President Barack Obama and China's paramount leader Xi Jinping that neither government would “conduct or knowingly support cyber-enabled theft of intellectual property” for an economic advantage. The security firm reviewed the activity of 72 groups that it suspects are operating in China or otherwise support Chinese state interests and determined that, as of mid-2014, there was an overall decrease in successful network compromises by China-based groups against organizations in the U.S. and 25 other countries.
In 2016, FireEye announced that it had identified several versions of an ICS-focused malware – dubbed IRONGATE – crafted to manipulate a specific industrial process running within a simulated Siemens control system environment. Although Siemens Product Computer Emergency Readiness Team (ProductCERT) confirmed to FireEye that IRONGATE is not viable against operational Siemens control systems and that IRONGATE does not exploit any vulnerabilities in Siemens products, the security firm said that IRONGATE invokes ICS attack concepts first seen in Stuxnet.
On May 8, 2016, FireEye detected an attack exploiting a previously unknown vulnerability in Adobe Flash Player (). The security firm reported the issue to the Adobe Product Security Incident Response Team (PSIRT) and Adobe released a patch for the vulnerability just four days later.
In 2016, FireEye discovered a widespread vulnerability affecting Android devices that permits local privilege escalation to the built-in user “radio”, making it so an attacker can potentially perform activities such as viewing the victim’s SMS database and phone history. FireEye reached out to Qualcomm in January 2016 and subsequently worked with the Qualcomm Product Security Team to address the issue.
In 2016, FireEye provided details on FIN6, a cyber criminal group that steals payment card data for monetization from targets predominately in the hospitality and retail sectors. The group was observed aggressively targeting and compromising point-of-sale (POS) systems, and making off with millions of payment card numbers that were later sold on an underground marketplace.
2017-2019
In 2017, FireEye detected malicious Microsoft Office RTF documents leveraging a previously undisclosed vulnerability, . This vulnerability allows a malicious actor to download and execute a Visual Basic script containing PowerShell commands when a user opens a document containing an embedded exploit. FireEye shared the details of the vulnerability with Microsoft and coordinated public disclosure timed with the release of a patch by Microsoft to address the vulnerability.
In 2018, FireEye helped Facebook identify 652 fake accounts.
2020-2021
FireEye revealed on Tuesday, December 8, 2020 that its own systems were pierced by what it called "a nation with top-tier offensive capabilities". The company said the attackers used "novel techniques" to steal copies of FireEye's red team tool kit, which the attackers could potentially use in other attacks. The same day, FireEye published countermeasures against the tools that had been stolen.
A week later in December 2020, FireEye reported the SolarWinds supply chain attack to the U.S. National Security Agency (NSA), the federal agency responsible for defending the U.S. from cyberattacks, and said its tools were stolen by the same actors. The NSA is not known to have been aware of the attack before being notified by FireEye. The NSA uses SolarWinds software itself.
Within a week of FireEye's breach, cyber-security firm McAfee said the stolen tools had been used in at least 19 countries, including the US, the UK, Ireland, the Netherlands, and Australia.
During its continued investigation of the hack of its own data and that of federal agencies, revealed on December 8, 2020, FireEye reported in early January that the attacks had originated from inside the United States, sometimes very close to the facilities affected, which enabled the hackers to evade surveillance by the National Security Agency and the defenses used by the Department of Homeland Security.
2022
A 2022 report by Trellix noted that hacking groups Wicked Panda (linked to China) and Cozy Bear (linked to Russia) were behind 46% of all state-sponsored hacking campaigns in the third quarter of 2021, and that in a third of all state-sponsored cyber attacks, the hackers abused Cobalt Strike security tools to get access to the victim's network. In a January 2022 report on Fox News, Trellix CEO Bryan Palma stated that there is an increasing level of cyberwarfare threats from Russia and China.
A 2022 Trellix report stated that hackers are using Microsoft OneDrive in an espionage campaign against government officials in Western Asia. The malware, named by Trellix as Graphite, employs Microsoft Graph to use OneDrive as a command and control server and execute the malware. The attack is split into multiple stages in order to remain hidden for as long as possible.
Acquisitions
References
External links
Computer security companies specializing in botnets
Computer companies of the United States
Companies based in Milpitas, California
Computer forensics
Companies listed on the Nasdaq
American companies established in 2004
2013 initial public offerings |
21066263 | https://en.wikipedia.org/wiki/Seer%20Systems | Seer Systems |
Seer Systems developed the world's first commercial software synthesizer in the early 1990s. Working in conjunction with Intel, then Creative Labs, and finally as an independent software developer and retailer, Seer helped lay the groundwork for a major shift in synthesis technology: using personal computers, rather than dedicated synthesizer keyboards, to create music.
History
Seer's founder, Stanley Jungleib, joined the staff of Sequential Circuits (creators of the groundbreaking Prophet-5 synthesizer) in 1979. Working as Publications Manager, he drafted the technical manuals for all Sequential products. Jungleib was a charter member of the International MIDI Association (which later became the MIDI Manufacturer's Association) and helped to establish the MIDI protocol.
In 1992, Jungleib was invited to teach a seminar on MIDI at Intel Architecture Labs. This led to the launching of an Intel project to create a software synthesizer for the 80486 processor. Jungleib assembled a development team, and at the end of 1992 founded Seer Systems to work on the project. The resulting synthesizer, code-named Satie, was demonstrated by Andrew Grove in his keynote speech at Comdex in 1994. Intel discontinued the project in 1995, possibly due to friction with Microsoft over Native Signal Processing.
Seer began afresh with a Pentium-based architecture. That same year, the founder of Sequential Circuits, Dave Smith, joined as President.
Seer struck a distribution deal with Creative Labs in 1996, which contributed to strong financial results for the AWE64. Over 10 million copies of the resulting software synthesizer were shipped. It was the first publicly available synthesizer to use Sondius WaveGuide technology developed at Stanford's CCRMA.
In 1997, Seer released Reality, the world's first professional software synthesizer for the PC. Reality won the 1998 Editors' Choice Award from Electronic Musician magazine, and industry veteran Craig Anderton called it a "groundbreaking product." 1999 saw the introduction of SurReal 1.0, an affordable player for Reality and SoundFont instrument sounds; the release of Reality 1.5, which added web features, more polyphony and better sound card support; and the issuance of the '274 patent ("System and Method for Generating, Distributing, Storing and Performing Musical Work Files"; inventor, Jungleib; assignee, Seer).
But by 2000, legal struggles with hostile investors, limited distribution and piracy caused Seer to cease active development, suspend sales through retail outlets, and briefly shift to an online sales model. An unrelated company, Seer Music Systems, founded by Canadian engineer Ian Grant, acquired the distribution rights and continues to offer legacy demos and support.
Since 2003, Seer's primary focus has been upon protecting its intellectual property (the '274 patent). Over several years, and following related litigation, the technology was licensed to Beatnik (2004), Microsoft (2006) and Yamaha (2007).
Products
Reality
Announced in January 1997, Reality ran on Pentium PCs under Windows 95/98. Version 1.0 offered multiple types of synthesis, including PCM wavetable, subtractive, modal synthesis and FM, as well as physical modeling via the Sondius WaveGuide technology licensed from Stanford University. Reality was the first synthesizer able to simultaneously play multiple synthesis types on multiple MIDI channels in real-time.
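As an illustration of the multi-synthesis idea (not Seer's actual engine), here is a minimal Python/NumPy sketch that renders one note per MIDI channel using a different synthesis method per channel and mixes the results; all parameter values and the voice designs are invented for the example.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def fm_note(freq, dur, ratio=2.0, index=3.0):
    """Simple two-operator FM voice."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    return np.sin(2 * np.pi * freq * t + index * np.sin(2 * np.pi * freq * ratio * t))

def subtractive_note(freq, dur, cutoff=0.2):
    """Sawtooth oscillator through a crude one-pole low-pass filter."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    saw = 2 * (t * freq % 1.0) - 1.0
    out = np.zeros_like(saw)
    for i in range(1, len(saw)):              # y[n] = y[n-1] + a * (x[n] - y[n-1])
        out[i] = out[i - 1] + cutoff * (saw[i] - out[i - 1])
    return out

# One synthesis type per MIDI channel, mixed into a single output buffer.
channel_voices = {0: fm_note(440.0, 1.0), 1: subtractive_note(220.0, 1.0)}
mix = sum(channel_voices.values()) / len(channel_voices)
print("rendered", len(mix), "samples")
```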
Reality 1.5 was released in 1999, adding more polyphony, support for a broader range of sound cards and the ability to load and play SoundFont 2.0 samples. It also incorporated SeerMusic, enabling fast Internet playback of music files using a combination of MIDI and Reality synthesis data.
In its February 2017 issue, Electronic Musician gave Seer Systems Reality a 2017 Editors' Choice Legacy Award, terming the 1997 introduction "a game-changing product—an unprecedented achievement—that has shaped the way we make music."
SurReal
In February 1999, Seer announced SurReal, a playback-oriented version of the Reality synthesizer engine. It was designed to be more user-friendly, and had fewer controls, but could load and play complex Reality soundbanks as well as SoundFonts. SurReal also supported SeerMusic for internet delivery.
SeerMusic
SeerMusic was introduced in January 1998. By combining MIDI performance data, synthesis parameters and sample data, music playback files could be significantly smaller than standard compressed digital audio data.
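To make the size argument concrete, here is a hedged sketch (not the actual SeerMusic format) that packs MIDI-style note events plus a few synthesis parameters into a compact binary blob and compares it with the size of the rendered audio it would stand in for; the field layout is invented purely for illustration.

```python
import struct

# Invented, illustrative event layout: (channel, note, velocity, start_ms, dur_ms)
events = [(0, 60, 100, 0, 500), (0, 64, 100, 500, 500), (1, 36, 90, 0, 1000)]
synth_params = {"patch": 3, "cutoff": 72, "resonance": 15}

def pack_song(events, params):
    """Pack events and synthesis parameters into a compact binary blob."""
    blob = struct.pack("<3B", params["patch"], params["cutoff"], params["resonance"])
    blob += struct.pack("<H", len(events))
    for ch, note, vel, start, dur in events:
        blob += struct.pack("<3B2H", ch, note, vel, start, dur)
    return blob

song = pack_song(events, synth_params)
rendered_audio_bytes = 2 * 44100 * 2 * 2  # ~2 s of 16-bit stereo PCM at 44.1 kHz
print(len(song), "bytes of performance data vs", rendered_audio_bytes, "bytes of raw audio")
```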
References
External links
Seer Systems official site
Seer Systems Archives: 1992–2005 — timeline
The Note Museum — distributor (see Seer Music page)
Software companies of the United States |
205868 | https://en.wikipedia.org/wiki/University%20of%20Kent | University of Kent | The University of Kent (formerly the University of Kent at Canterbury, abbreviated as UKC) is a semi-collegiate public research university based in Kent, United Kingdom. The University was granted its Royal Charter on 4 January 1965 and the following year Princess Marina, Duchess of Kent, was formally installed as the first Chancellor.
The university has its main campus north of Canterbury, set in parkland and housing over 6,000 students, as well as campuses in Medway and Tonbridge in Kent and European postgraduate centres in Brussels, Athens, Rome and Paris. The University is international, with students from 158 different nationalities and 41% of its academic and research staff coming from outside the United Kingdom. It is a member of the Santander Network of European universities encouraging social and economic development.
History
Origins
A university in the city of Canterbury was first considered in 1947, when an anticipated growth in student numbers led several areas, including Kent, to seek the creation of a new university. However, the plans never came to fruition. A decade later both population growth and greater demand for university places led to a re-consideration. In 1959 the Education Committee of Kent County Council explored the creation of a new university, formally accepting the proposal unanimously on 24 February 1960. Two months later the Education Committee agreed to seek a site at or near Canterbury, given the historical associations of the city, subject to the support of Canterbury City Council.
By 1962 a site was found at Beverley Farm, straddling the then boundary between the City of Canterbury and the administrative county of Kent. The university's original name, chosen in 1962, was the University of Kent at Canterbury, reflecting the fact that the campus straddled the boundary between the county borough of Canterbury and Kent County Council. At the time it was the normal practice for universities to be named after the town or city whose boundaries they were in, with both "University of Kent" and "University of Canterbury" initially proposed. The name adopted reflected the support of both the city and county authorities; as well as the existence of the University of Canterbury in New Zealand, which officially opposed the use of a name too similar to its own. The abbreviation "UKC" became a popular abbreviation for the university.
1965 to 2000
The University of Kent at Canterbury was granted its Royal Charter on 4 January 1965 and the first batch of 500 students arrived in the October of that year. On 30 March 1966 Princess Marina, Duchess of Kent was formally installed as the first Chancellor.
The University was envisaged as being a collegiate establishment, with most students living in one of the colleges on campus, and as specialising in inter-disciplinary studies in all fields. Over the years, changes in government policy and other changing demands have largely destroyed this original concept, leading to the present state, which is nearer the norm for a British university. However, the four original colleges – Darwin, Eliot, Keynes and Rutherford – remain, together with the newer Woolf and Turing colleges, each with their own masters.
The university grew at a rapid rate throughout the 1960s, with three colleges and many other buildings on campus being completed by the end of the decade. The 1970s saw further construction, but the university also encountered the biggest physical problem in its history. The university had been built above a tunnel on the disused Canterbury and Whitstable Railway. In July 1974 the tunnel collapsed, damaging part of the Cornwallis Building, which sank nearly a metre within about an hour on the evening of 11 July. Fortunately, the university had insurance against subsidence, so it was able to pay for the south-west corner of the building to be demolished and replaced by a new wing at the other end of the building.
Unix computers arrived in 1976 and UKC set up the first Unix-to-Unix copy (UUCP) test service to Bell Labs in the U.S. in 1979. UKC provided the first UUCP connections to non-academic users in the UK in the early 1980s.
In 1982 the university opened the University Centre at Tonbridge (now the University of Kent at Tonbridge) for its School of Continuing Education, helping to enhance the availability of teaching across the county. Building elsewhere included the Park Wood accommodation village and the Darwin houses in 1989.
During the 1990s and 2000s the University expanded beyond its original campus, establishing campuses in Medway, Tonbridge and Brussels, and partnerships with Canterbury College, West Kent College, South Kent College and MidKent College.
2000 to present
In the 2000s the university entered a collaboration named Universities at Medway with the University of Greenwich, MidKent College and Canterbury Christ Church University to deliver university provision in the Medway area. This led to the development of the University of Kent at Medway, which opened in 2001. Initially based at Mid-Kent College, the operation moved to a new joint campus in 2004. Small postgraduate centres opened in Paris in 2009, and later in Rome and Athens.
As a consequence of the expansion outside Canterbury the university's name was formally changed to the University of Kent on 1 April 2003. Part of the original reasoning for the name had already disappeared when local government reforms in the 1970s left the Canterbury campus entirely within the City of Canterbury, which lost its county borough status and came under Kent County Council.
In 2007 the university was rebranded with a new logo and website. The logo was chosen following consultation with existing university students and those in sixth forms across the country.
The University of Kent set its tuition fees for UK and European Union undergraduates at £9,000 for new entrants in 2012, which was approved by the Office for Fair Access (OFFA). The fee was approved by Council on 1 April 2011 and was confirmed by OFFA in July 2011. The proposed changes to UK and EU undergraduate tuition fees did not apply to international student fees.
Following the extension of Keynes College in 2001, two new colleges opened on the Canterbury campus, Woolf College for postgraduates in 2008 and Turing College for undergraduates in 2015. Several other new buildings were also added, including the Jarman School of Arts Building in 2009, the Colyer-Fergusson Music Building, a performing arts space, in 2012, and the Sibson building, housing maths and the business school, in 2017. A major £27m project to extend and refurbish the Templeman Library began in 2013, was completed in 2017 and formally opened in 2018. Additional accommodation was provided for students at the Medway Campus with the completion of Liberty Quays in 2009.
In 2015, the University held a number of events to celebrate its 50th anniversary, including festivals in Canterbury and Medway, a summer festival, the funding of twelve Beacon Projects and the temporary erection of a Ferris wheel on the Canterbury campus. In 2016, a consultation was launched on a masterplan for future development of the Canterbury campus. In March 2017 it was announced that, in partnership with Canterbury Christ Church University, the University of Kent had been given funding to develop Kent and Medway Medical School.
Campuses
Canterbury campus
The main Canterbury campus is situated in parkland in an elevated position just over two miles (3 km) from the city centre, with views over the city and the Canterbury Cathedral UNESCO World Heritage Site. The campus currently has approximately 12,000 full-time and 6,200 part-time students, with accommodation for over 5,000, in addition to 600 academic and research staff. Residential and academic buildings are intermingled in the central part of the campus, science buildings are clustered west of Giles Lane, and there is a dedicated student village on the western edge, several minutes' walk from the main campus. The campus is ecologically diverse and home to a number of protected species, including great crested newts. The north-west of the site is heavily forested, including pockets of ancient woodland, while the southern slopes contain a mix of wildflower and hay meadows, and there are seven ponds spread across the campus.
Facilities
The campus has a selection of shops, including a grocery store, bookshop, pharmacy and launderettes. Food and drink are provided by a range of cafes and bars run either by the University or the students' union. Bars include K-bar, in Keynes College, Mungo's, in Eliot College, Origins, in Darwin College, and Woody's in the Park Wood Student Village. Cafeteria-style food is available in Rutherford College, fine dining at the Beagle Restaurant in Darwin College, and food is served at the bars and other cafes around campus.
The campus nightclub, The Venue, was refurbished and modernised in 2010 and is open Wednesday to Saturday. The upstairs area was originally used as a live music venue, known first as The Lighthouse and then the Attic, but has since been replaced by the Student Media Centre, which hosts InQuire, KTV and CSR. Club nights and live music are also held at various bars on campus.
Sporting facilities are spread across two main sites: the sports centre, which contains several multi-purpose sports halls, a fitness suite, squash courts and climbing wall, and the Sports Pavilion site, with a variety of indoor and outdoor sports pitches and training facilities, including 3G and astroturf.
The Gulbenkian arts complex includes a theatre and cinema, as well as a small stage which hosts monthly comedy nights and occasional shows such as Jazz at Five and The Chortle Student Comedy Awards. The adjacent Colyer-Fergusson Building, which opened in 2013, includes an adaptable format concert/rehearsal hall with retractable seating and variable acoustics and practice rooms. The Gulbenkian Theatre seats 340 and presents student, professional and amateur shows throughout the year. The theatre was opened in 1969 and was named after the Calouste Gulbenkian Foundation which helped fund its construction. The Gulbenkian Cinema is an independent cinema in the Gulbenkian complex open to students and the general public. It is Kent's regional film theatre showing new mainstream and non-mainstream releases as well as archive and foreign language films. In the daytime the cinema is used as a lecture theatre for University students. The Gulbenkian complex also hosts a cafe/bar and restaurant facility open to students, staff and the general public.
Transport and access
The campus is accessed by road from either the west, with two entrances on the A290 Whitstable Road, or the east, via St Stephen's Hill. An off-road foot and cycle route connects the central campus to the northern edge of the city, and a regular bus service ('UniBus') is also in operation, although with a more limited timetable outside of term time. The A2 dual carriageway links the campus and city to London, the port at Dover and the national motorway network. The campus also lies at the southern end of the Crab and Winkle Way, a 7-mile off-road foot and cycle path running through farm and woodland to the coastal fishing town of Whitstable, providing a link for cycle commuters.
The closest railway station to the campus is Canterbury West which is, as of 2009, served by Southeastern services to London St Pancras. These services stop at Ashford International en route, thus providing a direct connection to Eurostar services to France and Belgium. Southeastern services also connect Canterbury West and Canterbury East stations with London Victoria and Charing Cross. Both of the Canterbury stations can be accessed by the UniBus service. The nearest international air services are provided from the London airports, Gatwick and Heathrow, with indirect National Express coach services to both from Canterbury Bus Station with one transfer at London Victoria Coach Station. The campus is also served by two coach services (Route 007) to/from London each day, with further services operating from Canterbury bus station.
Medway campus
In 2000 the University joined with other educational institutes to form the "Universities at Medway" initiative, aimed at increasing participation in higher education in the Medway Towns. The following year the University of Kent at Medway formally opened, initially based at Mid-Kent College. By 2004 a new campus for the university had been established in the old Chatham Dockyard, shared with Canterbury Christ Church University and the University of Greenwich.
The University of Kent and Medway Park Leisure Centre have gone into a multimillion-pound partnership to provide high-quality leisure facilities for university students and the general public. Medway Park (formerly the Black Lion Leisure Centre) was re-opened in 2011 by Princess Anne for use as a training venue for the 2012 London Olympics, as well as a training venue for the Egyptian and Congo National teams.
The campus accommodation, called Pier Quays, formerly named Liberty Quays until 2019 when Unite Group acquired Liberty Living, was finished in late 2009, and caters for over 600 students. The accommodation building includes a Tesco Express, Subway, and Domino's Pizza, and Cargo, a bar showing sports, live music, and entertainment.
Tonbridge campus
In 1982 the university established the School of Continuing Education in the centre of Tonbridge, extending its coverage to the entire county of Kent. Many buildings were added in the 1980s and 1990s. The campus is now called the University of Kent at Tonbridge. It collaborates with the Kent Business School and Kent Innovation and Enterprise.
Organisation and administration
Faculties, departments and schools
Until 2020, the University was divided into three faculties (humanities, sciences and social sciences), which were further sub-divided into 20 schools.
The original plan was to have no academic sub-divisions within the three faculties (initially Humanities, Social Sciences and Natural Sciences) and to incorporate an interdisciplinary element to all degrees through common first year courses ("Part I") in each faculty, followed by specialist study in the second and final years ("Part II"). The lack of Departments encouraged the development of courses that crossed traditional divides, such as Chemical Physics, Chemistry with Control Engineering, Biological Chemistry and Environmental Physical Science.
However, the interdisciplinary approach proved increasingly complex for two reasons. The levels of specialisation at A Levels meant that many students had not studied particular subjects for some years and this made it impossible to devise a course that both covered areas unstudied by some and did not bore others. This proved an especial problem in Natural Sciences, where many Mathematics students had not studied Chemistry at A Level and vice versa.
Additionally many subjects, particularly those in the Social Sciences, were not taught at A Level and required the first year as a grounding in the subject rather than an introduction to several different new subjects. Problems were especially encountered in the Faculty of Natural Sciences where the differing demands of Mathematics and physical sciences led to two almost completely separate programmes and student bases. In 1970 this led to the creation of the School of Mathematical Studies, standing outside the Faculties. The addition of other subjects led to increased pressure on common Part I programmes and increasingly students took more specialised Part I courses designed to prepare them for Part II study.
Substantial change to this structure did not come until the 1990s, driven more by national government policy than curricular demands, which were, after all, very flexible by nature. In 1989 the Universities Funding Council, which was merged into the Higher Education Funding Council for England (HEFCE) in 1992, was charged by the UK Government to determine the cost for teaching each subject. To meet these accountancy requirements, Kent required for the first time that each member of staff declare a single discipline they would be affiliated with in future. When departments were formed in the early 1990s this led to a great deal of reorganisation of staff, and destroyed many existing inter-disciplinary relationships. Following the formation of departments, finance was devolved to departments based on how many students were taught. This quickly evolved into undermining the interdisciplinary context further, as departments sought to control finance by increasing the amount of specialist teaching in the first year.
The faculties were further divided into 18 departments and schools, ranging from the School of English to the Department of Biosciences, and from Kent Law School to the Department of Economics. Also of note is the University's Brussels School of International Studies, located in Brussels, Belgium. The school offers master's degrees in international relations theory and international conflict analysis, along with an LLM in international law. In 2005 a new department, the Kent School of Architecture, began teaching its first students. In 2008, Wye College came under Kent's remit, in joint partnership with Imperial College London.
In 2020, because of financial pressures caused by a combination of the 2000 demographic dip and the 2020/21 COVID-19 pandemic, the university abolished the faculties and reorganised itself into 6 divisions (see below).
Colleges
The University is divided into seven colleges: six named after distinguished scholars and one after a town. Colleges have academic schools, lecture theatres, seminar rooms and halls of residence. Each college has a Master, who is responsible for student welfare within their college. In chronological order of construction they are:
The university also has an associate college named Chaucer College.
There was much discussion about the names adopted for most of the colleges, with the following alternative names all in consideration at one point or another: for Eliot: Caxton, after William Caxton; for Keynes: Richborough, a town in Kent, and Anselm, a former archbishop of Canterbury; and for Darwin: Anselm (again); Attlee, after Clement Attlee, the post-war Prime Minister; Becket, after Thomas Becket, another former archbishop (this was the recommendation of the college's provisional committee but rejected by the Senate); Conrad; Elgar, after Edward Elgar; Maitland; Marlowe, after Christopher Marlowe; Russell, after Bertrand Russell (this was the recommendation of the Senate but rejected by the Council); and Tyler, after both Wat Tyler and Tyler Hill, on which the campus stands. The name for Darwin College proved especially contentious and was eventually decided by a postal ballot of members of the Senate, choosing from: Attlee, Conrad, Darwin, Elgar, Maitland, Marlowe and Tyler.
(Both Becket and Tyler were eventually used as the names for residential buildings on campuses and the building housing both the Architecture and Anthropology departments is named Marlowe.)
Each college has residential rooms, lecture theatres, study rooms, computer rooms and social areas. The intention was that the colleges should not be just halls of residence but complete academic communities. Each college (except Woolf) has its own bar, all rebuilt on a larger scale, and originally had its own dining hall (only Rutherford still has a functioning dining hall; Darwin's is hired out for conferences and events; Keynes's was closed in 2000 and converted into academic space, although following Keynes's expansion in 2011 Dolce Vita was enlarged and became the dining hall for Keynes students in catered accommodation; and Eliot's was closed in 2006). It was expected that each college (more were planned) would have around 600 students as members, with an equivalent proportion of staff, with half the students living within the college itself and the rest coming onto campus to eat and study within their colleges. Many facilities, ranging from accommodation and tutorials to alumni relations, would be handled on a college basis. With no planned academic divisions below the faculty level, the colleges would be the main focus of students' lives and there would be no units of a similar or smaller size to provide a rival focus of loyalties.
This vision of a collegiate university has increasingly fallen away. The funding for colleges did not keep pace with the growth in student numbers, with the result that only four colleges were built. In later years when there was heavy student demand for scarce accommodation in Canterbury the solution was found in building additional on-campus accommodation but not in the form of further colleges. The hopes that students living off campus would stay around to eat dinner in their colleges were not met, whilst the abolition of college amenities fees removed students' direct stake in their colleges. With the growth of specialist subject departments as well as of other university wide facilities, more and more of the role of colleges was transferred to the central university. Accommodation and catering were transferred to the centralised University of Kent at Canterbury Hospitality (UKCH).
Today the University does not operate as a traditional collegiate university – applications are made to the University as a whole, and many of the colleges rely on each other for day-to-day operation. Academic departments have no formal ties to colleges other than those that are located within particular college buildings due to availability of space, with lectures, seminars and tutorials taking place wherever there is an available room rather than on a college basis. Many students are allocated accommodation in their respective college, but some are housed in developments with no defined collegiate link whilst others are housed in different colleges.
Despite this the six College Student Committees, volunteer groups made up of elected officers and supporting volunteers, have retained a reasonably strong presence on campus. They run fundraising events and welfare campaigns throughout the academic year, and organise student events for their colleges during Welcome Week. Every student in the University retains a college affiliation to either Keynes, Eliot, Rutherford, Darwin or Park Wood even if they do not live in college accommodation. Students are encouraged to stay engaged with their College Committees throughout their time at the University.
Finances
In the financial year ended 31 July 2013, the University of Kent had a total income (including share of joint ventures) of £201.3 million (2011/12 – £190.2 million), which grew by 5.8% on the back of an additional £21.4 million of fee income, and total expenditure of £188.7 million (2011/12 – £175.9 million). Key sources of income included £98.5 million from tuition fees and education contracts (2011/12 – £77.2 million), £48.9 million from Funding Council grants (2011/12 – £62.5 million), £13.4 million from research grants and contracts (2011/12 – £11.4 million) and £1.2 million from endowment and investment income (2011/12 – £1.09 million). During the 2012/13 financial year the University of Kent had a capital expenditure of £28.2 million (2011/12 – £16.1 million).
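As a quick arithmetic check of the stated 5.8% growth, using only the income figures quoted above:

```python
income_2012_13 = 201.3  # £ million, total income including share of joint ventures
income_2011_12 = 190.2  # £ million
growth = (income_2012_13 - income_2011_12) / income_2011_12 * 100
print(f"{growth:.1f}% growth")  # ~5.8%, matching the figure in the text
```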
At year end the University of Kent had endowment assets of £6.3 million (2011/12 – £6.04 million) and total net assets of £175.9 million (2011/12 – £165.1 million).
Coat of arms and logo
The University of Kent's coat of arms was granted by the College of Arms in September 1967. The white horse of Kent is taken from the arms of the County of Kent (and can also be seen on the Flag of Kent). The three Cornish choughs, originally belonging to the arms of Thomas Becket, were taken from the arms of the City of Canterbury. The Crest depicts the West Gate of Canterbury with a symbolic flow of water, presumably the Great Stour, below it. Two golden Bishops' Crosiers in the shape of a St. Andrews Cross are shown in front of it. The supporters – lions with the sterns of golden ships – are taken from the arms of the Cinque Ports.
The Coat of Arms is now formally used only for degree certificates, degree programmes and some merchandise, as a result of the University seeking a consistent identity branding.
Academic profile
Research
Kent is a research-led university with 24 schools and 40 specialist research centres spanning the sciences, technology, medical studies, the social sciences, arts and humanities. In the 2014 Research Excellence Framework the University of Kent was ranked 40th out of 128 participating institutions in a 'grade point average' league table in The Times Higher Education Supplement (falling from 31st in 2008), 30th in terms of 'research power' (rising from 40th in 2008), and 19th in terms of 'research intensity' (rising from 49th in 2008). The University had a total research income of £17 million in 2016.
Rankings
For 2020 The Guardian newspaper ranked Kent 65th in the UK, while The Sunday Times Good University Guide 2018 put Kent in 25th place, as did The Independent's Complete University Guide. QS places Kent 46th in the UK and 366th in the world, while Times Higher Education places it 44th in the UK and in the 301–350 group worldwide. In The Sunday Times 10-year (1998–2007) average ranking of British universities based on consistent league table performance, Kent was ranked 48th overall in the UK. In 2015, Kent ranked ahead of 10 Russell Group universities in The Complete University Guide. In research, both The Guardian and The Times rank Kent 29th, with The Independent rating the university 28th for its overall research activity in 2014.
The Complete University Guide shows that the average tariff points for entry were 353 UCAS points (old tariff) in 2017.
The National Student Survey in 2017 placed Kent joint 20th in the UK, with an overall satisfaction of 90%.
Library
The Templeman Library (named after Geoffrey Templeman, the University's first Vice-Chancellor) contains over a million items, including books, journals, videos, DVDs, and archive materials (for example, a full text of The Times from 1785 onwards), yet it is still only half its planned size. It has a materials fund of approximately £1 million a year and adds 12,000 items every year. It is open every day in term time, on a 24/7 basis. It receives 800,000 visits a year, with approximately half a million loans per annum.
The library also houses the British Cartoon Archive (established 1975), a national collection of mainly newspaper cartoons, with over 90,000 images catalogued.
In 2013 work began to extend, refurbish and completely modernise the Templeman Library, including the addition of study space, along with the creation of a new purpose built lecture theatre. Additionally, the Library facade underwent major renovation. This work was completed in 2017, with additional refurbishment work planned for 2018.
Franco-British programme
The bilingual Franco-British double-degree programme combines subjects in one degree and is taught in two countries. The first year is spent at the Institut d'études politiques de Lille (IEP), the second and third years at the University of Kent, the fourth year at the IEP of Lille and the fifth is spent in Canterbury, Brussels or Lille.
Students on the Franco-British double-degree programme receive, at the end of the fourth year, the Bachelor of Arts (BA) degree from the University of Kent and the Diplôme from the IEP of Lille, and, at the end of the fifth year, either the Master of Arts (MA) degree from Canterbury or Brussels, or the Master delivered by the IEP of Lille, chosen from 14 parcours de formation (study tracks) offered by the IEP of Lille.
Student life
The student population is mixed, with around 15,000 undergraduates and 4,000 postgraduates, and approximately 22% of students coming from overseas. Approximately 128 different nationalities are currently represented, and the female-to-male ratio is 55:45.
Students' Union
The Students' Union, officially known as "Kent Union", is the student representative body for students at the university. It is led by five elected full-time officers (the 'sabbatical team'), a Board of trustees, part-time student officers and 'lay' members of the local community and business selected for their specialist expertise.
The University has two Co-op shops on campus: one in the main campus area and the other in the Park Wood student village. The two all-purpose food and essentials stores were previously known as Essentials and Parkwood Essentials. The Union also operates the Park Wood bar Woody's and a 1,500-capacity nightclub, The Venue, which, unusually, is located on the central campus. Essentials, The Venue and other shops and Union offices are located in purpose-made buildings completed in 1998. Kent Union also co-ordinates over 200 sports clubs and societies, as well as media outlets, volunteering and charity activities, and provides student welfare services.
Demonstrations
In early March 1970 a General Meeting of the University of Kent at Canterbury Students' Union voted to occupy the Cornwallis Building as part of a national student movement to open personal records to individual student scrutiny. The occupation lasted about two weeks, with a majority vote ending the occupation on 18 March. Approximately 400 students marched out of the Cornwallis Building to present a set of demands that were handed by Union President David Lawrence to the University Registrar Mr Eric Fox. The demands had been drawn up and debated by groups of up to 300 students at a time in meetings and seminars held throughout the occupation.
In the early to mid-1970s, along with other plate-glass universities, the Union had a reputation for revolutionary politics, leading to demands for law changes from some staff trade unionists. It was active in anti-poll tax, anti-student loans and anti-racism campaigns as well as safety campaigns on campus in the late 1980s.
In 2003, ahead of the 'top-up fees' vote, the Union took 300 students to the NUS UK student demonstrations on three double-decker buses, covering all transport costs. In 2010, ahead of a parliamentary vote on raising tuition fees, the Union took part in the national demonstrations, heavily subsidising transport for 500 students to get to London.
Chaplaincy
Whilst the University is secular, there is a chaplaincy consisting of permanent Anglican and Catholic priests and a Pentecostal minister, as well as part-time chaplains from other denominations and faiths. The chaplaincy runs the annual Carol Service held in Canterbury Cathedral at the end of the Autumn Term.
Student housing
In addition to the student housing in the colleges, the University also has the following student housing:
Darwin Houses, a set of 26 student houses next to Darwin College, opened in 1989
Becket Court, next to Eliot College, opened in 1990
Tyler Court, three blocks of halls of residence. Block A was opened in 1995 mostly for postgraduates; Blocks B and C were completed in 2004 for undergraduates.
Parkwood, a mini student village comprising 262 two-storey houses and a recently built apartment complex, about 10 minutes' walk from the main campus. The initial houses were opened in 1980. A large addition to the Parkwood area was completed in 2005, comprising a number of en-suite rooms grouped into four-, five- and six-bedroom flats.
Turing College was officially opened in September 2015, comprising nine buildings of three to four floors each.
Student media
CSR 97.4FM
University of Kent and Canterbury Christ Church University, as well as their associated Student Unions, fund Canterbury's only student and community radio station: CSR 97.4FM. The radio station broadcasts from studios at both universities 24 hours a day, with live broadcasting from 7am – 12am. CSR 97.4FM replaced UKC Radio, the original student-run radio station at the University.
InQuire
The University has a student newspaper, InQuire, and an online news website, InQuire Media (launched in January 2008). The newspaper is published every two weeks and is edited by student volunteers, with content focused on campus issues and national news affecting students. Funded by Kent Union, the newspaper is subject to moderation before publication.
Kent Television
Kent Television (KTV), founded in 2012, is the volunteer-run television studio at the University.
Notable alumni
References
External links
University of Kent website
University of Kent Students' Union
Educational institutions established in 1965
University
1965 establishments in England
Internet mirror services
Universities UK |
1352308 | https://en.wikipedia.org/wiki/Laurel%20Aitken | Laurel Aitken | Lorenzo "Laurel" Aitken (22 April 1927 – 17 July 2005) was an influential Caribbean singer and one of the pioneers of Jamaican ska music. He is often referred to as the "Godfather of Ska".
Career
Born in Cuba of mixed Cuban and Jamaican descent, Aitken and his family settled in Jamaica in 1938. After an early career working for the Jamaican Tourist Board singing mento songs for visitors arriving at Kingston Harbour, he became a popular nightclub entertainer. His first recordings in the late 1950s were mento tunes such as "Nebuchnezer", "Sweet Chariot" (aka the gospel classic "Swing Low, Sweet Chariot") and "Baba Kill Me Goat". Aitken's 1958 single "Boogie in My Bones"/"Little Sheila" was one of the first records produced by Chris Blackwell and the first Jamaican popular music record to be released in the United Kingdom. Other singles from this period, more orientated towards Jamaican rhythm and blues, include "Low Down Dirty Girl" and "More Whisky", both produced by Duke Reid.
Aitken moved to Brixton, London, in 1960 and recorded for the Blue Beat label, releasing fifteen singles before returning to Jamaica in 1963. He recorded for Duke Reid, with backing from the Skatalites on tracks such as "Zion" and "Weary Wanderer", before returning to the UK, where he began working with Pama Records. He recorded hits such as "Fire in Mi Wire" and "Landlord and Tenants", which led to wider recognition outside of Jamaica and the UK. This cemented his position as one of ska's leading artists and earned him the nicknames The Godfather of Ska and, later, Boss Skinhead. He gained a loyal following not only among the West Indian community, but also among mods, skinheads and other ska fans. He had hit records in the United Kingdom and other countries from the 1950s through to the 1970s on labels such as Blue Beat, Pama, Trojan, Rio, Dr. Bird, Nu-Beat, Ska-Beat, Hot Lead and Dice. Some of his singles featured B-sides credited to his brother, guitarist Bobby Aitken. Aitken also recorded a few talk-over/deejay tracks under the guise of 'King Horror', such as "Loch Ness Monster", "Dracula, Prince of Darkness" and "The Hole".
Aitken settled in Leicester with his wife in 1970. His output slowed in the 1970s, and during this period he worked as an entertainer in nightclubs and restaurants in the area, including the popular Costa Brava Restaurant in Leicester, under his real name, Lorenzo. In 1980, with ska enjoying a resurgence in the wake of the 2 Tone movement, Aitken had his only success in the UK Singles Chart with "Rudi Got Married" (No. 60), released on I-Spy Records (the label created and managed by Secret Affair).
Aitken's career took in mento/calypso, R&B, ska, rock steady and reggae, and in the 1990s he even turned his talents to dancehall. He performed occasional concerts almost until his death from a heart attack in 2005. After a long campaign, a blue plaque in his honour was put up at his Leicester home in 2007. The punk band Rancid covered Aitken's "Everybody Suffering" on their 2014 LP Honor Is All We Know.
Discography
Albums
The Original Cool Jamaican Ska (1964, LP Compil)
Ska With Laurel (1965, Rio)
Laurel Aitkin Says Fire (1967, Doctor Bird)
Fire (1969)
High Priest of Reggae (1969, Nu-Beat)
The High Priest Of Reggae (1970)
Laurel Aitken Meets Floyd Lloyd and the Potato Five (1987, Gaz's) (with The Potato 5)
Early Days of Blue Beat, Ska and Reggae (1988, Bold Reprive)
True Fact (1988, Rackit) (with The Potato 5)
Ringo The Gringo (1989, Unicorn)
It's Too Late (1989, Unicorn)
Rise and Fall (1989, Unicorn)
Sally Brown (1989, Unicorn)
Ringo the Gringo (1990, Unicorn)
Rasta Man Power (1992, ROIR)
The Blue Beat Years (1996, Moon Ska)
Rocksteady Party (1996, Blue Moon) (with The Potato 5)
The Story So Far (1999, Grover)
Woppi King (1997, Trybute)
The Pama Years (1999, Grover)
The Long Hot Summer (1999, Grover) (Laurel Aitken and The Skatalites)
Clash of The Ska Titans (1999, Moon Ska) (Laurel Aitken versus The Skatalites)
Pioneer of Jamaican Music (2000, Reggae Retro)
Godfather of Ska (2000, Grover)
Jamboree (2001, Grover)
Rudi Got Married (2004, Grover)
En Espanol (2004, Liquidator)
Live at Club Ska (2004, Trojan)
The Pioneer of Jamaican Music (2005, Reggae Retro)
Super Star (2005, Liquidator)
You’ve Got What It Takes/That’s How Strong (My Love Is) (2005 - Mini CD)
The Very Last Concert (2007, Soulove) (CD + DVD)
Singles
"Nebuchanezzar/Sweet Chariot" (1958, Kalypso)
"Low Down Dirty Girl" (1959, Duke Reid)
"Drinkin' Whiskey" (1959, Starlite)
"Boogie Rock" (1960, Blue Beat)
"Jeannie Is Back" (1960, Blue Beat)
"Judgement Day" (1960, Blue Beat)
"Railroad Track" (1960, Blue Beat)
"More Whisky" (1960, Blue Beat)
"Aitken's Boogie" (1960, Kalypso)
"Baba Kill Me Goat" (1960, Kalypso)
"Boogie In My Bones" (1960, Starlite)
"Honey Girl" (1960, Starlite)
"Bar Tender" (aka Hey Bartender) (1961, Blue Beat)
"Bouncing Woman" (1961, Blue Beat)
"Mighty Redeemer" (1961, Blue Beat)
"Please Don't Leave Me" (1961, Blue Beat)
"Mary Lee" (1961, Melodisc)
"Love Me Baby" (1961, Starlite)
"Stars Were Made" (1961 Starlite)
"Brother David" (1962, Blue Beat)
"Lucille" (1962, Blue Beat)
"Sixty Days & Sixty Nights" (1962, Blue Beat)
"Jenny Jenny" (1962, Blue Beat)
"Mabel" (1962, Dice)
"Lion of Judah" (1963, Black Swan)
"The Saint" (1963, Black Swan)
"Zion City" (1963, Blue Beat)
"Little Girl" (1963, Blue Beat)
"Oh Jean" (1963, Dice)
"Sweet Jamaica" (1963, Dice)
"Low Down Dirty Girl" (1963, Duke)
"I Shall Remove" (1963, Island)
"What a Weeping" (1963, Island)
"In My Soul" (1963, Island)
"Adam & Eve" (1963, Rio)
"Mary" (1963, Rio)
"Bad Minded Woman" (1963, Rio)
"Devil or Angel" (1963, Rio)
"Freedom Train" (1963, Rio)
"This Great Day" (1964, Blue Beat)
"West Indian Cricket Test" (1964, JNAC)
"Pick Up Your Bundle" (1964, R&B)
"Yes Indeed" (1964, R&B)
"Bachelor Life" (1964, R&B)
"Leave Me Standing" (1964, Rio)
"John Saw Them Coming" (1964, Rio)
"Rock of Ages" (1964, Rio)
"Jamaica" (1965, Dice)
"We Shall Overcome" (1965, Dice)
"Mary Don't You Weep" (1965, Rio)
"Mary Lou" (1965, Rio)
"One More Time" (1965, Rio)
"Let's Be Lovers" (1965, Rio)
"Clementine" (1966, Blue Beat)
"Don't Break Your Promises" (1966, Rainbow)
"Voodoo Woman" (1966, Rainbow)
"How Can I Forget You" (1966, Rio)
"Baby Don't Do It" (1966, Rio)
"We Shall Overcome" (1966, Rio)
"Clap Your Hands" (1966, Rio)
"Jumbie Jamboree" (1966, Ska-Beat)
"Propaganda" (1966, Ska-Beat)
"Green Banana" (1966, Ska-Beat)
"Rock Steady" (1967, Columbia Blue Beat)
"I'm Still in Love With You Girl" (1967, Columbia Blue Beat)
"Never Hurt You" (1967, Fab)
"Sweet Precious Love" (1967, Rainbow)
"Mr. Lee" (1968, Dr. Bird)
"La La La (Means I Love You)" (1968, Dr. Bird)
"For Sentimental Reasons" (1968, Fab)
"Fire in Your Wire" (1969, Dr. Bird)
"Rice & Peas" (1969, Dr. Bird)
"Reggae Prayer" (1969, Dr. Bird)
"The Rise & Fall of Laurel Aitken" (1969, Dr. Bird)
"Haile Haile" (1969, Dr. Bird)
"Carolina" (1969, Dr. Bird)
"Think Me No Know" (1969, Junior)
"Woppi King" (1969, Nu-Beat)
"Suffering Still" (1969, Nu-Beat)
"Haile Selassie" (1969, Nu-Beat)
"Lawd Doctor" (1969, Nu-Beat)
"Run Powell Run" (1969, Nu-Beat)
"Save The Last Dance" (1969, Nu-Beat)
"Don't Be Cruel" (1969, Nu-Beat)
"Shoo Be Doo" (1969, Nu-Beat)
"Landlords & Tenants" (1969, Nu-Beat)
"Jesse James" (1969, Nu-Beat)
"Pussy Price Gone Up" (1969, Nu-Beat)
"Skinhead Train" (1969, Nu-Beat)
"Donkey Man" (1969, Unity)
"Pussy Got Thirteen Life" (1970, Ackee)
"Sin Pon You" (1970, Ackee)
"Moon Rock" (1970, Bamboo)
"Skinhead Invasion" (1970, Nu-Beat)
"I've Got Your Love" (1970, Nu-Beat)
"Scandal in Brixton Market" (1970, Nu-Beat)
"Nobody But Me" (1970, Nu-Beat)
"I'll Never Love Any Girl" (1970, Nu-Beat)
"Reggae Popcorn" (1970, Nu-Beat)
"Baby I Need Your Loving" (1970, Nu-Beat)
"Sex Machine" (1970, Nu-Beat)
"Pachanga" (1970, Nu-Beat)
"Mary's Boy Child" (1970, Pama)
"Why Can't I Touch You" (1970, Pama Supreme)
"Dancing with My Baby" (1971, Big Shot)
"If It's Hell Below" (1971, Black Swan)
"True Love" (1971, Nu-Beat)
"I Can't Stop Loving You" (1971, Nu-Beat)
"It's Too Late" (1971, Trojan)
"Take Me in Your Arms" (1972, Big Shot)
"Africa Arise" (1972, Camel)
"Reggae Popcorn" (1972, Pama)
"Never Be Anyone Else" 1974 (Hot Lead Records)
"Fattie Bum Bum" (1975, Punch)
"For Ever And Ever" (1977, DiP)
"Rudi Got Married" (1980, I Spy) UK # 60
"Big Fat Man" (1980, I Spy)
"Mad About You" (1986, Gaz's)
"Everybody Ska" (1989, Unicorn)
"Skinhead" (1999, Grover)
Videos/DVDs
Live at Gaz's Rockin' Blues (1989, Unicorn) (VHS)
Laurel Aitken And Friends – Live at Club Ska (2005, Cherry Red) (DVD)
References
Further reading
Barrow, Steve & Dalton, Peter: The Rough Guide To Reggae 3rd edn., Rough Guides, 2004
External links
Laurel Aitken Biography at Grover Records
Laurel Aitken Discography at Discogs
ROIR artists
1927 births
2005 deaths
Jamaican people of Cuban descent
Skinhead
Jamaican ska musicians
Island Records artists
Trojan Records artists
Jamaican expatriates in England
Blue Beat Records artists |
26176175 | https://en.wikipedia.org/wiki/MPDS4 | MPDS4 | MPDS, the MEDUSA Plant Design System (since 2006 MPDS4), is a suite of plant engineering applications for 2D/3D layout, design and modelling of process plants, factories or installations. The system's history is closely tied to the very beginnings of mainstream CAD and to the research culture fostered by Cambridge University and the UK government, as well as the resulting "Cambridge Phenomenon". MPDS was originally developed for 3D plant design and layout and for piping design. Today the software includes modules for 2D/3D factory layout, process and instrumentation diagrams (P&ID), mechanical handling systems design, steel design, ducting (HVAC) design, electrical design, and hangers and supports design. The latest version, MPDS4 5.2.1, was released for Microsoft Windows and Sun Solaris in February 2014.
History
MPDS’ history is tied in with the Computer-Aided Design Centre (or CADCentre) which was created in Cambridge in 1967 by the UK Government to carry out CAD research.
Famous British computer scientist Dr. Dick Newell worked there on a file-based macro language driven 3D plant design system called PDMS (Plant Design Management System). Together with colleague Tom Sancha he left the CADCentre in 1977 to form a company called Cambridge Interactive Systems or CIS and primarily concentrated on 2D CAD. CIS had developed an electrical cabling solution initially called CABLOS, which was first purchased by Dowty Engineering in about 1979. Another early adopter was BMW, which used the system for car wiring diagrams. CABLOS soon became known and sold as the MEDUSA drafting system under CIS. The proprietary programming language with which MEDUSA version 1 was developed was known as baCIS 1. Around this time, the company also began developing its own 3D modelling kernel for MEDUSA.
Around 1980, CIS partnered with Prime Computer, a U.S.-based computer hardware provider. Prime had an option on the MEDUSA source code should CIS ever fail. In 1983 the U.S. CAD company Computervision purchased CIS.
Computervision/CIS started developing the MEDUSA Plant Design System (MPDS), the first plant design software based on a relational database. Developers knew from their prior experience with a file-based macro-language driven system that the next generation plant design system had to be built on a relational database and with a much more powerful programming language to handle large data volumes, complexity and relationships. Whereas mechanical CAD engineers were developing machinery with a few hundred or maybe a thousand components, plant design engineers typically needed to deal with hundreds of thousands of components. To facilitate this work, the baCIS 2 interpretive language and the MDB relational database were developed for MPDS. Existing MEDUSA technology was used to create the 2D and 3D geometric data required for plant layouts. The creation of this data-centric concept separated the 3D visualisation of a plant from the underlying database and allowed engineers to plan and design installations with very large volumes of data, and produce all the required 2D drawings from 3D plant designs.
The first MPDS sales date to around 1988 to NEI Parsons (Northern Engineering Industries later became part of the Rolls Royce Industrial Power Group). Courtaulds Engineering, which had been using MEDUSA since 1983, was also an early MPDS adopter.
In the same year Prime Computer merged with Computervision and adopted the name Computervision to concentrate on software, due to declining hardware sales. MEDUSA continued to be developed throughout the 1990s in Cambridge, UK at Computervision's R&D centre at Harston Mill.
In 1993, the next generation of MEDUSA and MPDS was released. What would have been version 13 was released as MEDUSA NG and MPDS NG. They signified the shift from tablet-driven menus to a graphical user interface, although tablets could still be used on that release.
In 1994 Computervision closed its R&D facility in Cambridge, moving to Boston, Massachusetts. As a result, five former Computervision staff members and MEDUSA experts formed the company Quintic Ltd in Cambridge, which continued to provide MEDUSA and MPDS development and consultancy services to Computervision and the MEDUSA customer base. Work included the porting of MEDUSA NG to Microsoft Windows.
In 1998 the American CAD company Parametric Technology Corporation (PTC) acquired Computervision. The development partnership between Quintic and Computervision transferred to PTC.
One of the largest MEDUSA user bases was in the heavily manufacturing-driven economy of Germany. CAD Schroer, a company founded in 1986 by Michael Schroer as a provider of CAD-based engineering services, became a MEDUSA vendor in 1988, having used the software extensively on client projects. The company, which also provided add-on modules and customisations, had established a development relationship first with Computervision, then with PTC.
In 2001, CAD Schroer acquired all rights to MEDUSA and MPDS from PTC. The development partnership between Quintic and CAD Schroer strengthened, as the two companies worked to create a Fourth Generation release of MEDUSA and MPDS. This included a complete overhaul of the functionality: the development of a graphical user interface (GUI) based on the Qt framework, the development of data exchange mechanisms and interfaces with third-party systems, and porting to the open-source Linux operating system.
In 2005, CAD Schroer acquired its development partner Quintic Ltd, gaining CAD development expertise that dates back to the days of CIS and Prime. CAD Schroer UK remains a software development centre in Cambridge, whose staff continue to develop and support MEDUSA4 and MPDS4 in partnership with CAD development experts at CAD Schroer GmbH in Moers, Germany.
In 2006, CAD Schroer released the Fourth Generation of the MPDS plant design system, MPDS4. Since then the company has continued to develop and extend the functionality of the product suite, including the development of a factory layout module for designing 3D factories based on 2D drawings.
Technical Description
Database Architecture
Multi-user engineering design in MPDS is driven by a relational database. The project database can be deployed as a central design database or as a project-specific database and contains component catalogs with assigned component attributes. The database drives the design graphics as well as user administration and can be integrated with other database-driven systems, such as Enterprise Resource Planning (ERP) systems.
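As a rough illustration of how such a catalog-driven project database might look, the sketch below uses Python's sqlite3 module with an invented schema; the actual MDB tables and attribute names used by MPDS4 are proprietary and are not reproduced here.

import sqlite3

# Hypothetical, simplified stand-in for a central project/catalog database.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE catalog (
    part_no      TEXT PRIMARY KEY,
    kind         TEXT,     -- e.g. 'gate valve', 'elbow'
    spec         TEXT,     -- piping specification the part belongs to
    nominal_dn   INTEGER,  -- nominal diameter (mm)
    draw_routine TEXT      -- routine that generates the component's 3D graphics
)""")
db.execute("INSERT INTO catalog VALUES ('GV-100', 'gate valve', 'DIN-PN16', 100, 'valve_body_v1')")

# A designer placing a DN100 component from the DIN-PN16 spec would be offered
# only matching catalog entries; the stored attributes drive the graphics.
for part_no, routine in db.execute(
        "SELECT part_no, draw_routine FROM catalog WHERE spec = ? AND nominal_dn = ?",
        ("DIN-PN16", 100)):
    print(part_no, "->", routine)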
3D Graphics
MPDS combines the HOOPS 3D Graphics System with the relational database, whose catalog component attributes define the visual representation of each component. Because 3D plant models are generated from catalog-based drawing routines, the demand on computer memory and resources is limited. Plants of hundreds of thousands of components can be designed, edited and exported to a compact .HSF format for external visual review. The system supports varying display detail levels, allowing designers to visualise components in great detail, visually simplified, or merely as an outline object in space, as required for effective clash detection.
User Administration
Central user administration and access controls in MPDS allow Administrators to set up a variety of users who can work on a plant design simultaneously, and who can have different access privileges - limited, for example, to certain design disciplines or to certain areas within a plant. This is supported by integrated version and change management.
Quality Assurance
MPDS4 includes hard and soft clash detection, which can be applied to a whole project, to separate systems or between selected components. Consistency checking tools allow users to check work against specific design rules. Results can be passed to customisable reports, and components used in a design are automatically included in parts lists.
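MPDS4's own clash-detection implementation is not publicly documented, but the distinction between hard and soft clashes can be illustrated with a minimal axis-aligned bounding-box test; the following Python sketch uses invented names and data.

def boxes_within(a_min, a_max, b_min, b_max, clearance=0.0):
    # True if two axis-aligned boxes overlap once each is grown by `clearance`.
    return all(a_min[i] - clearance <= b_max[i] and
               b_min[i] - clearance <= a_max[i] for i in range(3))

def classify_clash(a, b, soft_clearance):
    # a, b are (min_corner, max_corner) tuples in project coordinates.
    if boxes_within(a[0], a[1], b[0], b[1], 0.0):
        return "hard clash"    # geometry actually interpenetrates
    if boxes_within(a[0], a[1], b[0], b[1], soft_clearance):
        return "soft clash"    # within the required clearance envelope
    return None

# Example: a pipe run passing 30 mm from a steel column, with 50 mm clearance required.
pipe   = ((0.0, 0.0, 0.0), (2.0, 0.1, 0.1))
column = ((2.03, 0.0, 0.0), (2.23, 0.2, 3.0))
print(classify_clash(pipe, column, soft_clearance=0.05))   # "soft clash"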
Modules
The MPDS4 Assembly Manager is at the core of the plant design and factory layout system, and can be extended with several user extensible and customizable modules covering plant engineering disciplines.
PIPING DESIGN MPDS4 PIPING DESIGN is an industrial piping design software add-on with extensive libraries of catalogue components to a variety of industrial standards, including DIN, ANSI and BS. Its routing tools are used for loading, positioning and replacing components, manually or automatically. The module supports P&ID-driven piping design and is pipe specification driven, so that only components from the same specification can be connected. MPDS4 PIPING DESIGN is fully integrated with ISOGEN (from ALIAS Piping Solutions) for automated piping isometric production.
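The pipe-specification rule described above (only components from the same specification can be connected) amounts to a simple catalog filter and compatibility check; the Python sketch below is hypothetical and does not reflect MPDS4's actual rules or data model.

def parts_in_spec(catalog, spec, kind):
    # Only parts listed in the active pipe specification are offered for routing.
    return [p for p in catalog if p["spec"] == spec and p["kind"] == kind]

def can_connect(a, b):
    # Illustrative rule: same specification and matching nominal size at the joint.
    return a["spec"] == b["spec"] and a["nominal_dn"] == b["nominal_dn"]

catalog = [
    {"part_no": "EL-100", "kind": "elbow", "spec": "DIN-PN16", "nominal_dn": 100},
    {"part_no": "EL-101", "kind": "elbow", "spec": "ANSI-150", "nominal_dn": 100},
]
pipe = {"part_no": "PIPE-100", "kind": "pipe", "spec": "DIN-PN16", "nominal_dn": 100}
print([p["part_no"] for p in parts_in_spec(catalog, "DIN-PN16", "elbow")])  # ['EL-100']
print(can_connect(pipe, catalog[1]))  # False: different specification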
P&ID MPDS4 P&ID is an application for creating intelligent process and instrumentation diagrams, for extracting data from them, and for using that data to create and cross-check the 3D world. Design can be database-driven and based on existing parts lists. P&ID diagrams can form the basis of 2D layouts and 3D designs, with the ability to cross-check P&IDs and automatically load P&ID components not yet included in a 3D plant design in the appropriate position.
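At its simplest, the cross-check between a P&ID and the 3D model described here is a comparison of tagged components; a brief sketch with invented tag names follows.

pid_tags   = {"P-101", "V-201", "HX-301", "V-202"}   # components on the diagram
model_tags = {"P-101", "V-201", "HX-301"}            # components already placed in 3D

missing_in_3d  = pid_tags - model_tags    # candidates for automatic loading into the model
orphaned_in_3d = model_tags - pid_tags    # placed in 3D but absent from the P&ID
print(missing_in_3d)    # {'V-202'}
print(orphaned_in_3d)   # set()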
FACTORY LAYOUT MPDS4 FACTORY LAYOUT is a hybrid 2D/3D design environment where 2D layouts or drawings are used as the basis of 3D designs. Height attributes added to 2D building plans are used to produce 3D buildings. Symbols used in a 2D layout are linked to 3D model files which are automatically generated when users switch to 3D. Other components or product specials can be modelled using a sheet-based modelling approach.
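The 2D-to-3D step described above, in which height attributes on a 2D building plan produce 3D buildings, is essentially an extrusion; a minimal sketch with invented data structures follows.

def extrude(footprint, height, base=0.0):
    # footprint: list of (x, y) vertices of a closed 2D outline from the layout drawing.
    # Returns the 3D vertices of the floor and roof polygons of the extruded solid.
    floor = [(x, y, base) for x, y in footprint]
    roof  = [(x, y, base + height) for x, y in footprint]
    return floor, roof

# A rectangular hall drawn in 2D, given a height attribute of 8.5 m:
hall_outline = [(0, 0), (40, 0), (40, 25), (0, 25)]
floor, roof = extrude(hall_outline, height=8.5)
print(roof[2])   # (40, 25, 8.5)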
MATERIALS HANDLING MPDS4 MECHANICAL HANDLING is a design application with a series of configurable catalogs of mechanical materials handling components, which can be physically interconnected to form part of an industrial process. It includes catalogs of conveyor belts, cranes, fork lift trucks, industrial racking, and robots, and allows installation designers to select, lay out, configure, visualize and add intelligence to process machinery in a plant. The module also supports the controlled creation of product specials for materials handling.
STEEL DESIGN MPDS4 STEEL DESIGN is a module for constructing steel frames for buildings and equipment support. MPDS4 STEEL DESIGN includes catalogs of steel sections for many worldwide steel standards and allows users to design steel members, plates, stairs and ladders.
DUCTING DESIGN MPDS4 DUCTING DESIGN is a software module for routing HVAC, of differing sections, into a plant or factory. MPDS4 DUCTING DESIGN includes catalogs with different types of ducts, valves, fans and other supporting components.
ELECTRICAL DESIGN MPDS4 ELECTRICAL DESIGN is a design module for routing or connecting electrical systems with components throughout a plant, ship, or factory. The user extensible and customizable catalogs contain many different types of electrical and control systems, as well as cable trays, cable ducts and links, and other supporting components. Auto routing functionality finds the shortest cable route between two designated points. By adding a KVA (kilovolt-ampere) rating to selected components, users can analyze the required power rating of an entire network of connected instances and cables.
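How MPDS4's auto-router works internally is not documented publicly, but finding the shortest cable route between two designated points is the classic shortest-path problem over a network of trays and ducts; the Python sketch below uses Dijkstra's algorithm on an invented graph.

import heapq

def shortest_route(graph, start, goal):
    # graph: {node: [(neighbour, length_in_m), ...]} built from cable trays and ducts.
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    done = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in done:
            continue
        done.add(node)
        if node == goal:
            break
        for nxt, length in graph.get(node, []):
            nd = d + length
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(pq, (nd, nxt))
    if goal not in dist:
        return None, float("inf")
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

trays = {"panel": [("T1", 4.0)], "T1": [("T2", 6.0), ("T3", 9.0)],
         "T2": [("motor", 5.0)], "T3": [("motor", 1.0)]}
route, length = shortest_route(trays, "panel", "motor")
print(route, length)   # ['panel', 'T1', 'T3', 'motor'] 14.0

# The network power analysis described above could then be, for example:
# total_kva = sum(c["kva"] for c in connected_components)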
HANGERS & SUPPORTS DESIGN MPDS4 HANGERS & SUPPORTS DESIGN is a design application for accurately modelling supports between pipes and steelwork in a plant or installation.
ENGINEERING REVIEW MPDS4 ENGINEERING REVIEW is an application for conducting realistic engineering design reviews within the MPDS4 plant environment, visually presenting all of the plant project data. The module contains functionality for sectioning and setting transparency and allows users to define and generate movie-like walkthroughs of an installation.
REVIEW MPDS4 REVIEW is an external review application for users who do not have the MPDS4 plant design system installed. MPDS4 can generate .HSF (Hoops format) files of a plant design, which can be e-mailed to users of the MPDS4 REVIEW tool. They use the software to conduct interactive design reviews and walk-throughs, or present designs to third parties.
See also
PDMS
References
External links
CAD Schroer Web Pages
3D graphics software
Computer-aided design software
Computer-aided manufacturing software
Computer-aided design software for Linux
Science and technology in Cambridgeshire |
1563634 | https://en.wikipedia.org/wiki/Adobe%20Type%20Manager | Adobe Type Manager | Adobe Type Manager (ATM) was the name of a family of computer programs created and marketed by Adobe Systems for use with their PostScript Type 1 fonts. The last release was Adobe ATM Light 4.1.2, according to Adobe's FTP site at the time.
Modern operating systems such as Windows and macOS have built-in support for PostScript fonts, eliminating the need for Adobe's third-party utility.
Apple Macintosh
The original ATM was created for the Apple Macintosh computer platform to scale PostScript Type 1 fonts for the computer monitor, and for printing to non-PostScript printers. Mac Type 1 fonts come with screen fonts set to display at certain point sizes only. In Macintosh operating systems prior to Mac OS X, Type 1 fonts set at other sizes would appear jagged on the monitor. ATM allowed Type 1 fonts to appear smooth at any point size, and to print well to non-PostScript devices.
Around 1996, Adobe expanded ATM into a font-management program called ATM Deluxe; the original ATM was renamed ATM Light. ATM Deluxe performed the same font-smoothing function as ATM Light, but performed a variety of other functions: activation and deactivation of fonts; creating sets of fonts that could be activated or deactivated simultaneously; viewing and printing font samples; and scanning for duplicate fonts, font format conflicts, and PostScript fonts missing screen or printer files.
Around 2001, with Apple's Mac OS X, support for Type 1 fonts was built into the operating system using ATM Light code contributed by Adobe. ATM for Mac was then no longer necessary for font imaging or printing.
Adobe discontinued development of ATM Deluxe for Macintosh after Apple moved to Mac OS X. Adobe ceased selling ATM Deluxe in 2005. ATM Deluxe does not work reliably under OS X (even under Classic); however, ATM Light remains useful to Type 1 font users under Classic.
Microsoft Windows
Adobe ported these products to the Microsoft Windows operating system platform, where they managed font display by patching into Windows (3.0, 3.1x, 95, 98, Me) at a very low level. The design of Windows NT made this kind of patching unviable, and Microsoft initially responded by allowing Type 1 fonts to be converted to TrueType on install, but in Windows NT 4.0, Microsoft added "font driver" support to allow ATM to provide Type 1 support (and in theory other font drivers for other types).
As with ATM Light for Macintosh, Adobe licensed the core code to Microsoft, which integrated it into Windows 2000 and Windows XP, making ATM Light for Windows obsolete except for the special case of "multiple master" fonts, which Microsoft did not support in Windows and for which ATM Light still acts as a font driver.
ATM Light is still available for Windows users, but ATM Deluxe is no longer developed or sold.
Users of ATM 4.0 (Light or Deluxe) on Windows 95/98/ME who upgrade to Windows 2000/XP may encounter problems, and it is vital not to install version 4.0 into Windows 2000 or later; affected users are encouraged to visit the Adobe web site for technical information and patches. Version 4.1.2 is fully compatible with Windows 2000 and XP (it will run on 64-bit XP, but because the installer does not work there, it must first be installed on 32-bit XP and then copied over to 64-bit XP).
ATM installed on XP may prevent a system from entering standby; the error message indicates that the keyboard driver needs updating. Uninstalling ATM corrects the issue.
Windows Vista is incompatible with both ATM Light and ATM Deluxe. Windows Vista can use Adobe Type 1 fonts natively, making add-ons like ATM unnecessary.
The latest version of ATM for Windows 3.1 is 3.02. There was no ATM Deluxe for Windows versions prior to 95.
Acrobat Reader, starting with version 2.1, installs a version of ATM for its own use, referred to as a Portable Font Server, but there is no control panel or other user interface for it. It is therefore unsuitable for the tasks which most people need to install ATM for.
Other operating systems
Adobe Type Manager was also made available for a select few PC operating systems available during the early 1990s, including NeXTSTEP, DESQview, and OS/2. Unlike the Windows and Mac versions, these versions of ATM were bundled with the OS itself.
There were also ATM versions for extremely popular DOS applications, the most notable being WordPerfect 5.0 and 5.1. This incarnation of ATM, made by LaserTools was named PrimeType in the United States and Adobe Type Manager for WordPerfect elsewhere. An alternative to ATM for WordPerfect 5.1 was infiniType Plus by SoftMaker. WordPerfect 6.0 and newer included its own Type 1 system, making third-party solutions obsolete.
Competing products
Bitstream FaceLift
SoftMaker infiniType
Linotype FontExplorer X
Extensis Suitcase Fusion
Bohemian Coding FontCase
See also
Adobe Type
References
External links
Adobe Type Manager official website
List of Adobe product releases
Using Adobe Type Manager with Windows 3.0
Font packages for Windows - Bitstream's FaceLift and Adobe Systems' Adobe Type Manager
Classic Mac OS software
Type Manager
Discontinued Adobe software
Font managers |
20762 | https://en.wikipedia.org/wiki/Michael%20Crichton | Michael Crichton | John Michael Crichton (; October 23, 1942 – November 4, 2008) was an American author and filmmaker. His books have sold over 200 million copies worldwide, and over a dozen have been adapted into films. His literary works are usually within the science fiction, techno-thriller, and medical fiction genres, and heavily feature technology. His novels often explore technology and failures of human interaction with it, especially resulting in catastrophes with biotechnology. Many of his novels have medical or scientific underpinnings, reflecting his medical training and scientific background.
Crichton received an M.D. from Harvard Medical School in 1969 but did not practice medicine, choosing to focus on his writing instead. Initially writing under a pseudonym, he eventually wrote 26 novels, including The Andromeda Strain (1969), The Terminal Man (1972), The Great Train Robbery (1975), Congo (1980), Sphere (1987), Jurassic Park (1990), Rising Sun (1992), Disclosure (1994), The Lost World (1995), Airframe (1996), Timeline (1999), Prey (2002), State of Fear (2004), and Next (2006). Several novels, in various states of completion, were published after his death in 2008.
Crichton was also involved in the film and television industry. In 1973, he wrote and directed Westworld, the first film to utilize 2D computer-generated imagery. He also directed Coma (1978), The First Great Train Robbery (1979), Looker (1981), and Runaway (1984). He was the creator of the television series ER (1994–2009), and several of his novels were adapted into films, most notably the Jurassic Park franchise.
He held a contrarian position on various scientific issues such as climate change, the health risks of secondhand smoke, and the search for alien life. Crichton himself framed this contrarianism as a practical skepticism of "consensus-based" science, arguing that over-reliance on statistical models creates the potential for bias, especially in the face of political and social pressures such as the desire to avert nuclear war.
Life
Early life
John Michael Crichton was born on October 23, 1942, in Chicago, Illinois, to John Henderson Crichton, a journalist, and Zula Miller Crichton, a homemaker. He was raised on Long Island, in Roslyn, New York, and showed a keen interest in writing from a young age; at 14, he had an article about a trip he took to Sunset Crater published in The New York Times.
Crichton later recalled, "Roslyn was another world. Looking back, it's remarkable what wasn't going on. There was no terror. No fear of children being abused. No fear of random murder. No drug use we knew about. I walked to school. I rode my bike for miles and miles, to the movie on Main Street and piano lessons and the like. Kids had freedom. It wasn't such a dangerous world... We studied our butts off, and we got a tremendously good education there."
Crichton had always planned on becoming a writer and began his studies at Harvard College in 1960. During his undergraduate study in literature, he conducted an experiment to expose a professor who he believed was giving him abnormally low marks and criticizing his literary style. Informing another professor of his suspicions, Crichton submitted an essay by George Orwell under his own name. The paper was returned by his unwitting professor with a mark of "B−". He later said, "Now Orwell was a wonderful writer, and if a B-minus was all he could get, I thought I'd better drop English as my major." His differences with the English department led Crichton to switch his undergraduate concentration. He obtained his bachelor's degree in biological anthropology summa cum laude in 1964 and was initiated into the Phi Beta Kappa Society. He received a Henry Russell Shaw Traveling Fellowship from 1964 to 1965 and was a visiting lecturer in Anthropology at the University of Cambridge in the United Kingdom in 1965. Crichton later enrolled at Harvard Medical School. Crichton later said "about two weeks into medical school I realized I hated it. This isn't unusual since everyone hates medical school – even happy, practicing physicians."
Pseudonymous novels (1965–1968)
In 1965, while at Harvard Medical School, Crichton wrote a novel, Odds On. "I wrote for furniture and groceries", he said later. Odds On is a 215-page paperback novel which describes an attempted robbery in an isolated hotel on Costa Brava. The robbery is planned scientifically with the help of a critical path analysis computer program, but unforeseen events get in the way. Crichton submitted it to Doubleday, where a reader liked it but felt it was not for the company. Doubleday passed it on to New American Library, which published it in 1966. Crichton used the pen name John Lange because he planned to become a doctor and did not want his patients to worry he would use them for his plots. The name came from fairy tale writer Andrew Lang. Crichton added an "e" to the surname and substituted his own real first name, John, for Andrew. The novel was successful enough to lead to a series of John Lange novels. Film rights were sold in 1969, but no movie resulted.
The second Lange novel, Scratch One (1967), relates the story of Roger Carr, a handsome, charming, privileged man who practices law, more as a means to support his playboy lifestyle than a career. Carr is sent to Nice, France, where he has notable political connections, but is mistaken for an assassin and finds his life in jeopardy. Crichton wrote the book while traveling through Europe on a travel fellowship. He visited the Cannes Film Festival and Monaco Grand Prix, and then decided, "any idiot should be able to write a potboiler set in Cannes and Monaco", and wrote it in eleven days. He later described the book as "no good". His third John Lange novel, Easy Go (1968), is the story of Harold Barnaby, a brilliant Egyptologist who discovers a concealed message while translating hieroglyphics informing him of an unnamed pharaoh whose tomb is yet to be discovered. Crichton later said the book earned him $1,500. Crichton later said, "My feeling about the Lange books is that my competition is in-flight movies. One can read the books in an hour and a half, and be more satisfactorily amused than watching Doris Day. I write them fast and the reader reads them fast and I get things off my back."
Crichton's fourth novel was A Case of Need (1968), a medical thriller. The novel had a different tone to the Lange books; accordingly, Crichton used the pen name "Jeffrey Hudson", based on Sir Jeffrey Hudson, a 17th-century dwarf in the court of queen consort Henrietta Maria of England. The novel proved a turning point towards Crichton's future novels, in which technology is central to the subject matter, although this novel was as much about medical practice. The novel earned him an Edgar Award in 1969. He intended to use the "Jeffrey Hudson" pseudonym for other medical novels but ended up using it only once. It would later be adapted into the film The Carey Treatment (1972).
Pseudonyms
John Lange
Jeffery Hudson
Michael Douglas
Early novels and screenplays (1969–1974)
Crichton said that after he finished his third year of medical school, "I stopped believing that one day I'd love it and realised that what I loved was writing." He began publishing book reviews under his name. In 1969, Crichton wrote a review for The New Republic (as J. Michael Crichton), critiquing Slaughterhouse-Five by Kurt Vonnegut. He also continued to write Lange novels: Zero Cool (1969) dealt with an American radiologist on vacation in Spain who is caught in a murderous crossfire between rival gangs seeking a precious artifact. The Venom Business (1969) relates the story of a smuggler who uses his exceptional skill as a snake handler to his advantage by importing snakes to be used by drug companies and universities for medical research.
The first novel that was published under Crichton's name was The Andromeda Strain (1969), which would prove to be the most important novel of his career and establish him as a bestselling author. The novel documented the efforts of a team of scientists investigating a deadly extraterrestrial microorganism that fatally clots human blood, causing death within two minutes. Crichton was inspired to write it after reading The IPCRESS File by Len Deighton while studying in England. Crichton says he was "terrifically impressed" by the book – "a lot of Andromeda is traceable to Ipcress in terms of trying to create an imaginary world using recognizable techniques and real people." He wrote the novel over three years. The novel became an instant hit, and film rights were sold for $250,000. It was adapted into a 1971 film by director Robert Wise.
During his clinical rotations at the Boston City Hospital, Crichton grew disenchanted with the culture there, which appeared to emphasize the interests and reputations of doctors over the interests of patients. He graduated from Harvard, obtaining an MD in 1969, and undertook a post-doctoral fellowship study at the Salk Institute for Biological Studies in La Jolla, California, from 1969 to 1970. He never obtained a license to practice medicine, devoting himself to his writing career instead. Reflecting on his career in medicine years later, Crichton concluded that patients too often shunned responsibility for their own health, relying on doctors as miracle workers rather than advisors. He experimented with astral projection, aura viewing, and clairvoyance, coming to believe that these included real phenomena that scientists had too eagerly dismissed as paranormal.
Three more Crichton books under pseudonyms were published in 1970. Two were Lange novels, Drug of Choice and Grave Descend. Grave Descend earned him an Edgar Award nomination the following year. There was also Dealing: or the Berkeley-to-Boston Forty-Brick Lost-Bag Blues written with his younger brother Douglas Crichton. Dealing was written under the pen name "Michael Douglas", using their first names. Crichton wrote it "completely from beginning to end". Then his brother rewrote it from beginning to end, and then Crichton rewrote it again. This novel was made into a movie in 1972. Around this time Crichton also wrote and sold an original film script, Morton's Run. He also wrote the screenplay Lucifer Harkness in Darkness.
Aside from fiction, Crichton wrote several other books based on medical or scientific themes, often based upon his own observations in his field of expertise. In 1970, he published Five Patients, which recounts his experiences of hospital practices in the late 1960s at Massachusetts General Hospital in Boston. The book follows each of five patients through their hospital experience and the context of their treatment, revealing inadequacies in the hospital institution at the time. The book relates the experiences of Ralph Orlando, a construction worker seriously injured in a scaffold collapse; John O'Connor, a middle-aged dispatcher suffering from fever that has reduced him to a delirious wreck; Peter Luchesi, a young man who severs his hand in an accident; Sylvia Thompson, an airline passenger who suffers chest pains; and Edith Murphy, a mother of three who is diagnosed with a life-threatening disease. In Five Patients, Crichton gives a brief history of medicine up to 1969 to help place hospital culture and practice in context, and addresses the costs and politics of American healthcare. In 1974, he wrote a pilot script for a medical series, "24 Hours", based on his book Five Patients; however, networks were not enthusiastic.
As a personal friend of the artist Jasper Johns, Crichton compiled many of his works in a coffee table book, published as Jasper Johns. It was originally published in 1970 by Harry N. Abrams, Inc. in association with the Whitney Museum of American Art and again in January 1977, with a second revised edition published in 1994. The psychiatrist Janet Ross owned a copy of the painting Numbers by Jasper Johns in Crichton's later novel The Terminal Man. The technophobic antagonist of the story found it odd that a person would paint numbers as they were inorganic.
In 1972, Crichton published his last novel as John Lange, Binary, which relates the story of a villainous middle-class businessman who attempts to assassinate the President of the United States by stealing an army shipment of the two precursor chemicals that form a deadly nerve agent.
The Terminal Man (1972) is about a psychomotor epilepsy sufferer, Harry Benson, who regularly suffers seizures followed by blackouts, conducts himself inappropriately during seizures, and wakes up hours later with no knowledge of what he has done. Believed to be psychotic, he is investigated and electrodes are implanted in his brain. The book continued the preoccupation in Crichton's novels with machine-human interaction and technology. The novel was adapted into a 1974 film directed by Mike Hodges and starring George Segal. Crichton was hired by Warner Bros. to adapt his novel The Terminal Man into a script. The studio felt he had departed from the source material too much and had another writer adapt it for the 1974 film.
ABC TV wanted to buy the film rights to Crichton's novel Binary. The author agreed on the condition that he could direct the film. ABC agreed, provided someone other than Crichton wrote the script. The result, Pursuit (1972), was a ratings success. Crichton then wrote and directed the 1973 science fiction western-thriller film Westworld, about robots that run amok, which was his feature-film directorial debut. It was the first feature film to use 2D computer-generated imagery (CGI). The producer of Westworld hired Crichton to write an original script, which became the erotic thriller Extreme Close-Up (1973). Directed by Jeannot Szwarc, the movie disappointed Crichton.
Period novels and directing (1975–1988)
In 1975, Crichton wrote The Great Train Robbery, which would become a bestseller. The novel is a recreation of the Great Gold Robbery of 1855, a massive gold heist, which takes place on a train traveling through Victorian era England. A considerable portion of the book was set in London. Crichton had become aware of the story when lecturing at Cambridge University. He later read the transcripts of the court trial and started researching the historical period.
In 1976, Crichton published Eaters of the Dead, a novel about a 10th-century Muslim who travels with a group of Vikings to their settlement. Eaters of the Dead is narrated as a scientific commentary on an old manuscript and was inspired by two sources. The first three chapters retell Ahmad ibn Fadlan's personal account of his journey north and his experiences in encountering the Rus', a Varangian tribe, whilst the remainder is based upon the story of Beowulf, culminating in battles with the 'mist-monsters', or 'wendol', a relict group of Neanderthals.
Crichton wrote and directed the suspense film Coma (1978), adapted from the 1977 novel of the same name by Robin Cook, a friend of his. There are other similarities in terms of genre and the fact that both Cook and Crichton had medical degrees, were of similar age, and wrote about similar subjects. The film was a popular success. Crichton then wrote and directed an adaptation of his own book, The Great Train Robbery (1978), starring Sean Connery and Donald Sutherland. The film would go on to be nominated for Best Cinematography Award by the British Society of Cinematographers, also garnering an Edgar Allan Poe Award for Best Motion Picture by the Mystery Writers Association of America.
In 1979, it was announced that Crichton would direct a movie version of his novel Eaters of the Dead for the newly formed Orion Pictures. This did not occur. Crichton pitched the idea of a modern-day King Solomon's Mines to 20th Century Fox, which paid him $1.5 million for the film rights to the novel, a screenplay and a directorial fee for the movie, before a word had been written. He had never worked that way before, usually writing the book and then selling it. He eventually managed to finish the book, and Congo became a best seller. Crichton wrote the screenplay for Congo after he wrote and directed Looker (1981), which was a financial disappointment. Crichton came close to directing a film of Congo with Sean Connery, but the film did not happen. Eventually a film version was made in 1995 by Frank Marshall.
In 1984, Telarium released a graphic adventure based on Congo. Because Crichton had sold all adaptation rights to the novel, he set the game, named Amazon, in South America, and Amy the gorilla became Paco the parrot. That year Crichton also wrote and directed Runaway (1984), a police thriller set in the near future which was a box office disappointment.
Crichton had begun writing Sphere in 1967 as a companion piece to The Andromeda Strain. His initial storyline began with American scientists discovering a 300-year-old spaceship underwater with stenciled markings in English. However, Crichton later realized that he "didn't know where to go with it" and put off completing the book until a later date. The novel was published in 1987. It relates the story of psychologist Norman Johnson, who is required by the U.S. Navy to join a team of scientists assembled by the U.S. Government to examine an enormous alien spacecraft discovered on the bed of the Pacific Ocean, and believed to have been there for over 300 years. The novel begins as a science fiction story, but rapidly changes into a psychological thriller, ultimately exploring the nature of the human imagination. The novel was adapted into the 1998 film directed by Barry Levinson and starring Dustin Hoffman.
Crichton worked as a director only on Physical Evidence (1989), a thriller originally conceived as a sequel to Jagged Edge.
In 1988, Crichton was a visiting writer at the Massachusetts Institute of Technology.
A book of autobiographical writings, Travels was published in 1988.
Commercial success and collaboration with Steven Spielberg (1989–1999)
In 1990, Crichton published the novel Jurassic Park. Crichton utilized the presentation of "fiction as fact", used in his previous novels, Eaters of the Dead and The Andromeda Strain. In addition, chaos theory and its philosophical implications are used to explain the collapse of an amusement park in a "biological preserve" on Isla Nublar, a fictional island to the west of Costa Rica. The novel began as a screenplay Crichton wrote in 1983, about a graduate student who recreates a dinosaur. Eventually, given his reasoning that genetic research is expensive and "there is no pressing need to create a dinosaur", Crichton concluded that it would emerge from a "desire to entertain", leading to a wildlife park of extinct animals. Originally, the story was told from the point of view of a child, but Crichton changed it as everyone who read the draft felt it would be better if told by an adult.
Crichton originally had conceived a screenplay about a graduate student who recreates a dinosaur, but decided to put off exploring his fascination with dinosaurs and cloning until he began writing the novel. Steven Spielberg learned of the novel in October 1989 while he and Crichton were discussing a screenplay that would become the television series ER. Before the book was published, Crichton demanded a non-negotiable fee of $1.5 million as well as a substantial percentage of the gross. Warner Bros. and Tim Burton, Sony Pictures Entertainment and Richard Donner, and 20th Century Fox and Joe Dante bid for the rights, but Universal eventually acquired the rights in May 1990 for Spielberg. Universal paid Crichton a further $500,000 to adapt his own novel, which he had completed by the time Spielberg was filming Hook. Crichton noted that, because the book was "fairly long", his script only had about 10% to 20% of the novel's content. The film, directed by Spielberg, was released in 1993.
In 1992, Crichton published the novel Rising Sun, an international bestselling crime thriller about a murder in the Los Angeles headquarters of Nakamoto, a fictional Japanese corporation. The book was adapted into the 1993 film directed by Philip Kaufman and starring Sean Connery and Wesley Snipes, released the same year as the adaptation of Jurassic Park.
His next novel, Disclosure, published in 1994, addresses the theme of sexual harassment previously explored in his 1972 Binary. Unlike that novel however, Crichton centers on sexual politics in the workplace, emphasizing an array of paradoxes in traditional gender functions by featuring a male protagonist who is being sexually harassed by a female executive. As a result, the book has been criticized harshly by feminist commentators and accused of anti-feminism. Crichton, anticipating this response, offered a rebuttal at the close of the novel which states that a "role-reversal" story uncovers aspects of the subject that would not be seen as easily with a female protagonist. The novel was made into a film the same year, directed by Barry Levinson and starring Michael Douglas and Demi Moore.
Crichton was the creator and an executive producer of the television drama ER based on his 1974 pilot script 24 Hours. Spielberg helped develop the show, serving as an executive producer on season one and offering advice (he insisted on Julianna Margulies becoming a regular, for example). It was also through Spielberg's Amblin Entertainment that John Wells was contacted to be the show's executive producer.
Crichton then published The Lost World in 1995 as the sequel to Jurassic Park. The title was a reference to Arthur Conan Doyle's The Lost World (1912). It was made into the 1997 film two years later, again directed by Spielberg. In March 1994, Crichton said there would probably be a sequel novel as well as a film adaptation, stating that he had an idea for the novel's story.
Then, in 1996, Crichton published Airframe, an aero-techno-thriller. The book continued Crichton's overall theme of the failure of humans in human-machine interaction, given that the plane worked perfectly and the accident would not have occurred had the pilot reacted properly.
He also wrote Twister (1996) with Anne-Marie Martin, his wife at the time.
In 1999, Crichton published Timeline, a science fiction novel in which experts time travel back to the medieval period. The novel, which continued Crichton's long history of combining technical details and action in his books, addresses quantum physics and time travel directly and received a warm welcome from medieval scholars, who praised his depiction of the challenges in studying the Middle Ages. In 1999, Crichton founded Timeline Computer Entertainment with David Smith. Despite signing a multi-title publishing deal with Eidos Interactive, only one game was ever published, Timeline. Released by Eidos Interactive on November 10, 2000, for the PC, the game received negative reviews. A 2003 film based on the book was directed by Richard Donner and starring Paul Walker, Gerard Butler and Frances O'Connor.
Eaters of the Dead was adapted into the 1999 film The 13th Warrior directed by John McTiernan, who was later removed, with Crichton himself taking over direction of reshoots.
Final novels and later life (2000–2008)
In 2002, Crichton published Prey, about developments in science and technology, specifically nanotechnology. The novel explores relatively recent phenomena engendered by the work of the scientific community, such as artificial life, emergence (and by extension, complexity), genetic algorithms, and agent-based computing.
In 2004, Crichton published State of Fear, a novel concerning eco-terrorists who attempt mass murder to support their views. Global warming serves as a central theme to the novel. A review in Nature found the novel "likely to mislead the unwary". The novel had an initial print run of 1.5 million copies and reached the No. 1 bestseller position at Amazon.com and No. 2 on The New York Times Best Seller list for one week in January 2005.
The last novel published while he was still living was Next in 2006. The novel follows many characters, including transgenic animals, in the quest to survive in a world dominated by genetic research, corporate greed, and legal interventions, wherein government and private investors spend billions of dollars every year on genetic research.
In 2006, Crichton clashed with journalist Michael Crowley, a senior editor of the magazine The New Republic. In March 2006, Crowley wrote a strongly critical review of State of Fear, focusing on Crichton's stance on global warming. In the same year, Crichton published the novel Next, which contains a minor character named "Mick Crowley", who is a Yale graduate and a Washington, D.C.–based political columnist. The character was portrayed as a child molester with a small penis. The character does not appear elsewhere in the book. The real Crowley, also a Yale graduate, alleged that by including a similarly named character Crichton had libeled him.
Posthumous works
Several novels that were in various states of completion upon Crichton's death have since been published. The first, Pirate Latitudes, was found as a manuscript on one of his computers after his death. It centers on a fictional privateer who attempts to raid a Spanish galleon. It was published in November 2009 by HarperCollins.
Additionally, Crichton had completed the outline for, and was roughly a third of the way through, a novel titled Micro, which centers on technology that shrinks humans to microscopic size. Micro was completed by Richard Preston using Crichton's notes and files, and was published in November 2011.
On July 28, 2016, Crichton's website and HarperCollins announced the publication of a third posthumous novel, titled Dragon Teeth, which he had written in 1974. It is a historical novel set during the Bone Wars, and includes the real life characters of Othniel Charles Marsh and Edward Drinker Cope. The novel was released in May 2017.
In addition, some of his published works are being continued by other authors. On February 26, 2019, Crichton's website and HarperCollins announced the publication of The Andromeda Evolution, the sequel to The Andromeda Strain, a collaboration with CrichtonSun LLC. and author Daniel H. Wilson. It was released on November 12, 2019.
It was later announced that his unpublished works would be adapted into TV shows and movies in collaboration with CrichtonSun and Range Media Partners.
Scientific and legal career
Video games and computing
In 1983, Crichton wrote Electronic Life, a book that introduces BASIC programming to its readers. The book, written like a glossary, with entries such as "Afraid of Computers (everybody is)", "Buying a Computer", and "Computer Crime", was intended to introduce the idea of personal computers to a reader who might be faced with the hardship of using them at work or at home for the first time. It defined basic computer jargon and assured readers that they could master the machine when it inevitably arrived. In his words, being able to program a computer is liberation: "In my experience, you assert control over a computer—show it who's the boss—by making it do something unique. That means programming it. ... If you devote a couple of hours to programming a new machine, you'll feel better about it ever afterward." In the book, Crichton predicts a number of later developments in computing, among them that computer networks would increase in importance as a matter of convenience, enabling the sharing of information and pictures that we see online today, something the telephone never could. He also makes predictions for computer games, dismissing them as "the hula hoops of the '80s", and saying "already there are indications that the mania for twitch games may be fading." In a section of the book called "Microprocessors, or how I flunked biostatistics at Harvard", Crichton again seeks his revenge on the teacher who had given him abnormally low grades in college. Within the book, Crichton included many self-written demonstrative Applesoft (for Apple II) and BASICA (for IBM PC compatibles) programs.
Amazon is a graphical adventure game created by Crichton and produced by John Wells. Trillium released it in the United States in 1984, and the game runs on Apple II, Atari 8-bit, Atari ST, Commodore 64, and DOS. Amazon sold more than 100,000 copies, making it a significant commercial success at the time. It featured plot elements similar to those previously used in Congo.
Crichton started a company selling a computer program he had originally written to help him create budgets for his movies. He often sought to utilize computing in films, such as Westworld, which was the first film to employ computer-generated special effects. He also pushed Spielberg to include them in the Jurassic Park films. For his pioneering use of computer programs in film production he was awarded the Academy Award for Technical Achievement in 1995.
Intellectual property cases
In November 2006, at the National Press Club in Washington, D.C., Crichton joked that he considered himself an expert in intellectual property law. He had been involved in several lawsuits with others claiming credit for his work.
In 1985, the United States Court of Appeals for the Ninth Circuit heard Berkic v. Crichton, 761 F.2d 1289 (1985). Plaintiff Ted Berkic wrote a screenplay called Reincarnation Inc., which he claims Crichton plagiarized for the movie Coma. The court ruled in Crichton's favor, stating the works were not substantially similar.
In the 1996 case, Williams v. Crichton, 84 F.3d 581 (2d Cir. 1996), Geoffrey Williams claimed that Jurassic Park violated his copyright covering his dinosaur-themed children's stories published in the late 1980s. The court granted summary judgment in favor of Crichton.
In 1998, a United States District Court in Missouri heard the case of Kessler v. Crichton, which, unlike the other cases, went all the way to a jury trial. Plaintiff Stephen Kessler claimed the movie Twister (1996) was based on his work Catch the Wind. It took the jury about 45 minutes to reach a verdict in favor of Crichton. After the verdict, Crichton refused to shake Kessler's hand.
Crichton later summarized his intellectual property legal cases: "I always win."
Global warming
Crichton became well known for attacking the science behind global warming. He testified on the subject before Congress in 2005.
His views were contested by a number of scientists and commentators, among them meteorologist Jeffrey Masters, who wrote a critical review of Crichton's 2004 novel State of Fear.
Peter Doran, author of a paper in the January 2002 issue of Nature reporting that some areas of Antarctica had cooled between 1986 and 2000, wrote an opinion piece in The New York Times on July 27, 2006, in which he stated, "Our results have been misused as 'evidence' against global warming by Michael Crichton in his novel State of Fear." Al Gore said on March 21, 2007, before a U.S. House committee: "The planet has a fever. If your baby has a fever, you go to the doctor ... if your doctor tells you you need to intervene here, you don't say 'Well, I read a science fiction novel that tells me it's not a problem'." Several commentators have interpreted this as a reference to State of Fear.
Literary technique and style
Crichton's novels, including Jurassic Park, have been described by The Guardian as "harking back to the fantasy adventure fiction of Sir Arthur Conan Doyle, Jules Verne, Edgar Rice Burroughs, and Edgar Wallace, but with a contemporary spin, assisted by cutting-edge technology references made accessible for the general reader". According to The Guardian, "Michael Crichton wasn't really interested in characters, but his innate talent for storytelling enabled him to breathe new life into the science fiction thriller". Like The Guardian, The New York Times has also noted the boys' adventure quality of his novels, interfused with modern technology and science.
Crichton's works were frequently cautionary; his plots often portrayed scientific advancements going awry, commonly resulting in worst-case scenarios. A notable recurring theme in Crichton's plots is the pathological failure of complex systems and their safeguards, whether biological (Jurassic Park), militaristic/organizational (The Andromeda Strain), technological (Airframe), or cybernetic (Westworld). This theme of the inevitable breakdown of "perfect" systems and the failure of "fail-safe measures" can be seen strongly in the poster for Westworld, whose slogan was "Where nothing can possibly go worng", and in the discussion of chaos theory in Jurassic Park. His 1973 film Westworld contains one of the earliest references to a computer virus and the first mention of the concept in a movie. Crichton believed, however, that his view of technology had been misunderstood.
The use of an author surrogate was a feature of Crichton's writings from the beginning of his career. In A Case of Need, one of his pseudonymous whodunit stories, Crichton used a first-person narrative to portray the hero, a Bostonian pathologist racing against the clock to clear a friend of blame for medical malpractice over a young woman's death from a botched abortion.
Crichton also used the literary technique known as the false document. Eaters of the Dead is a "recreation" of the Old English epic Beowulf presented as a scholarly translation of Ahmad ibn Fadlan's 10th-century manuscript. The Andromeda Strain and Jurassic Park incorporate fictionalized scientific documents in the form of diagrams, computer output, DNA sequences, footnotes, and bibliography. The Terminal Man and State of Fear include authentic published scientific works which illustrate the premise.
Crichton often employs the premise of diverse experts or specialists assembled to tackle a unique problem requiring their individual talents and knowledge. The premise was used for The Andromeda Strain, Sphere, Jurassic Park, and, to a lesser extent, Timeline. Sometimes the individual characters in this dynamic work in the private sector and are suddenly called upon by the government to form an immediate response team once some incident or discovery triggers their mobilization. This premise or plot device has been imitated and used by other authors and screenwriters in several books, movies and television shows since.
Personal life
As an adolescent, Crichton felt isolated because of his height (6 ft 9 in, or 206 cm). During the 1970s and 1980s, he consulted psychics and enlightenment gurus to make himself feel more socially acceptable and to improve his karma. As a result of these experiences, Crichton practiced meditation throughout much of his life. He was often regarded as a deist, although he never publicly confirmed this. When asked in an online Q&A if he was a spiritual person, Crichton responded, "Yes, but it is difficult to talk about."
Crichton was a workaholic. When drafting a novel, which would typically take him six or seven weeks, Crichton withdrew completely to follow what he called "a structured approach" of ritualistic self-denial. As he neared the end of writing each book, he would rise increasingly early each day, eventually sleeping for less than four hours by going to bed at 10 p.m. and waking at 2 a.m.
In 1992, Crichton was ranked among People magazine's 50 most beautiful people.
He married five times. Four of the marriages ended in divorce: with Joan Radam (1965–1970), Kathleen St. Johns (1978–1980), Suzanna Childs (1981–1983), and actress Anne-Marie Martin (1987–2003), the mother of his daughter Taylor Anne (born 1989). At the time of his death, Crichton was married to Sherri Alexander (2005–2008), who was six months pregnant with their son, John Michael Todd Crichton, born on February 12, 2009.
Illness and death
According to Crichton's brother Douglas, Crichton was diagnosed with lymphoma in early 2008. In accordance with the private way in which Crichton lived, his cancer was not made public until his death. He was undergoing chemotherapy treatment at the time of his death, and Crichton's physicians and relatives had been expecting him to recover. He died at age 66 on November 4, 2008.
Crichton had an extensive collection of 20th-century American art, which Christie's auctioned in May 2010.
Reception
Science novels
Most of Crichton's novels address issues emerging in scientific research fields. In a number of his novels (Jurassic Park, The Lost World, Next, Congo), genomics plays an important role. Usually, the drama revolves around the sudden eruption of a scientific crisis, revealing the disruptive impacts new forms of knowledge and technology may have. This is stated explicitly in The Andromeda Strain, Crichton's first science novel: "This book recounts the five-day history of a major American scientific crisis" (1969, p. 3). It is also seen in The Terminal Man, where unexpected behaviors emerge after electrodes are implanted into a person's brain.
Awards
Mystery Writers of America's Edgar Allan Poe Award, Best Novel, 1969 – A Case of Need
Association of American Medical Writers Award, 1970
Mystery Writers of America's Edgar Allan Poe Award, Best Motion Picture, 1980 – The Great Train Robbery
Named to the list of the "Fifty Most Beautiful People" by People magazine, 1992
Golden Plate Award of the American Academy of Achievement, 1992
Academy of Motion Picture Arts and Sciences Technical Achievement Award, 1994
Writers Guild of America Award, Best Long Form Television Script of 1995 (the Writers Guild lists the award for 1996)
George Foster Peabody Award, 1994 – ER
Primetime Emmy Award for Outstanding Drama Series, 1996 – ER
Ankylosaur named Crichtonsaurus bohlini, 2002
American Association of Petroleum Geologists Journalism Award, 2006
Speeches
Crichton was also a popular public speaker. He delivered a number of notable speeches in his lifetime, particularly on the topic of global warming.
Intelligence Squared debate
On March 14, 2007, Intelligence Squared held a debate in New York City titled Global Warming Is Not a Crisis, moderated by Brian Lehrer. Crichton argued for the motion alongside Richard Lindzen and Philip Stott, against Gavin Schmidt, Richard Somerville, and Brenda Ekwurzel. Before the debate, the audience was largely on the 'against the motion' side (57% vs. 30%, with 13% undecided). By the end of the debate, the audience vote had shifted notably toward the 'for the motion' side (46% vs. 42%, with 12% undecided), meaning that Crichton's side won the debate. Although the debate inspired many blog responses and was considered one of Crichton's best rhetorical performances, reception to his message was mixed.
Other speeches
Mediasaurus: The Decline of Conventional Media
In a speech delivered at the National Press Club in Washington, D.C. on April 7, 1993, Crichton predicted the decline of mainstream media.
Ritual Abuse, Hot Air, and Missed Opportunities: Science Views Media
The American Association for the Advancement of Science (AAAS) invited Crichton to address scientists' concerns about how they are portrayed in the media. The speech was delivered to the AAAS in Anaheim, California, on January 25, 1999.
Environmentalism as Religion
This was not the first discussion of environmentalism as a religion, but it caught on and was widely quoted. Crichton explains his view that religious approaches to the environment are inappropriate and cause damage to the natural world they intend to protect. The speech was delivered to the Commonwealth Club in San Francisco, California on September 15, 2003.
Science Policy in the 21st century
Crichton outlined several issues before a joint meeting of liberal and conservative think tanks. The speech was delivered at AEI–Brookings Institution in Washington, D.C. on January 25, 2005.
The Case for Skepticism on Global Warming
On January 25, 2005, at the National Press Club in Washington, D.C., Crichton delivered a detailed explanation of why he criticized the consensus view on global warming. Using published UN data, he argued that claims of catastrophic warming arouse doubt, that reducing CO2 is vastly more difficult than is commonly presumed, and that societies are morally unjustified in spending vast sums on a speculative issue when people around the world are dying of starvation and disease.
Caltech Michelin Lecture
"Aliens Cause Global Warming" January 17, 2003. In the spirit of his science fiction writing Crichton details research on nuclear winter and SETI Drake equations relative to global warming science.
Testimony before the United States Senate
Crichton was invited to testify before the Senate in September 2005, as an "expert witness on global warming". The speech was delivered to the Committee on Environment and Public Works in Washington, D.C.
Complexity Theory and Environmental Management
In previous speeches, Crichton criticized environmental groups for failing to incorporate complexity theory. Here he explains in detail why complexity theory is essential to environmental management, using the history of Yellowstone Park as an example of what not to do. The speech was delivered to the Washington Center for Complexity and Public Policy in Washington, D.C. on November 6, 2005.
Genetic Research and Legislative Needs
While writing Next, Crichton concluded that laws covering genetic research desperately needed to be revised, and spoke to congressional staff members about problems ahead. The speech was delivered to a group of legislative staffers in Washington, D.C. on September 14, 2006.
Why Speculate?
In a speech in 2002, Crichton coined the term Gell-Mann amnesia effect, after physicist Murray Gell-Mann. He used the term to describe the phenomenon of experts believing news articles on topics outside their fields of expertise, even after acknowledging that articles in the same publication on topics within their expertise are error-ridden and full of misunderstanding.
Legacy
In 2002, a species of ankylosaurid, Crichtonsaurus bohlini, was named in his honor. The species was later concluded to be dubious, however, and some of the diagnostic fossil material was transferred to the new binomial Crichtonpelta benxiensis, also named in his honor. His properties continue to be adapted into films, making him the 20th-highest-grossing story creator of all time.
List of selected works
The Andromeda Strain (1969)
The Terminal Man (1972)
Jurassic Park (1990)
Disclosure (1994)
Prey (2002)
State of Fear (2004)
Next (2006)
Citations
General bibliography
External links
Musings on Michael Crichton — News and Analysis on his Life and Works
Michael Crichton Obituary. Associated Press. Chicago Sun-Times
Michael Crichton bibliography on the Internet Book List
Complete bibliography and cover gallery of the first editions
Comprehensive listing and info on Michael Crichton's complete works
Academics of the University of Cambridge
Academy Award for Technical Achievement winners
American film directors
American male non-fiction writers
American male novelists
American male screenwriters
American medical writers
American men's basketball players
American science fiction writers
American social commentators
American thriller writers
Cultural critics
Deaths from cancer in California
Deaths from lymphoma
Edgar Award winners
Environmental fiction writers
Film producers from Illinois
Film producers from New York (state)
Futurologists
Harvard College alumni
Harvard Crimson men's basketball players
Harvard Medical School alumni
Medical fiction writers
Novelists from Illinois
People from Roslyn, New York
Science fiction film directors
Screenwriters from Illinois
Screenwriters from New York (state)
Social critics
Techno-thriller writers
Television producers from Illinois
Television producers from New York (state)
Writers from Chicago
1942 births
2008 deaths
20th-century American male writers
20th-century American non-fiction writers
20th-century American novelists
21st-century American male writers
21st-century American non-fiction writers
21st-century American novelists
20th-century American screenwriters
20th-century pseudonymous writers |
12417652 | https://en.wikipedia.org/wiki/MicroScope | MicroScope | MicroScope is a digital magazine and website for IT professionals within the ICT channel in the United Kingdom. Based in London, the magazine is owned by TechTarget; it was formerly published as a weekly print magazine under Dennis Publishing Ltd and Reed Business Information for over 29 years. The last printed edition was published on Monday 28 March 2011, leaving only the online edition. The magazine's main focus is news, analysis, and assessment of issues within the channel marketplace. It was available free to professionals who met the circulation requirements, funded through revenue from display and classified advertising. In the late 1990s, MicroScope described itself in its masthead as “MicroScope – The No.1 news weekly for computer resellers and suppliers”.
Founded in 1982, MicroScope was first circulated by Dennis Publishing Ltd at a time of fundamental change in the British computer industry brought about by the microcomputer revolution. Over time, the magazine's coverage expanded as the ICT channel emerged. MicroScope's layout and format changed, adding opinion columns, financial news, US news, European news, City news, MicroSoap, Microscope classified, Spotlight, cartoons, special reports, reader letters and crosswords. The magazine is recognizable by its red nameplate and full-size image covers. MicroScope started out as a trade newspaper with recognisable broadsheet traits, certain editions running to upwards of 200 pages; towards the end of its print life it had evolved into a magazine, with both its page size and number of pages reduced.
The current editor-in-chief, Simon Quicke, succeeded Billy MacInnes in 2002. MicroScope claims to be the longest-running channel publication in the United Kingdom. Since 2011, the magazine's content has been published digitally in an e-zine format. As of 2008, its print edition had a weekly circulation of 22,275, up 143% from 1995. MicroScope's digital magazine and website receive more than 100,000 page views each month, a significantly higher reach than the print edition had when the title transferred to an online format. The magazine's readership is generally made up of volume distributors, value-added distributors, resellers, MSPs, VARs, ISVs, and technology consultants. It was named "Computer Journal of the Year" in 1984 by the Computer Press Association, for excellence in the field of computer journalism.
History
1982-1990
The first issue of MicroScope was published on 23 September 1982 in broadsheet format. It was produced by Dennis Publishing Ltd, which during this period became a leading publisher of computer enthusiast magazines in the United Kingdom. Felix Dennis was chairman and Peter Jackson took charge of the paper as its founding editor.
In the early 1980s, driven by growth in sales of IBM microcomputers and the arrival of distribution, particularly Northamber and Westcoast, the ICT channel emerged.
1990-1999
In 1998, Dennis Publishing Ltd sold MicroScope to Reed Business Information, along with a number of other titles including The VAR, Network Reseller News and Business & Technology.
2000-2011
In the early 2000s, the magazine maintained its coverage of the industry which expanded to include specialist distributors, including the likes of Hammer, CMS Distribution, Magirus, Wick Hill and Zycko. The reseller level also adapted to increasingly complex customer needs with the emergence of technology and vertical market specialists.
MicroScope observed that the channel is moving more towards a subscription-based consumption model. There has been a growth in the number of managed service providers (MSPs) and most of the major distributors have established their own cloud marketplaces, providing applications and services.
The channel has continued to evolve and adapt to changing market needs and is now seen by many customers as a 'trusted advisor' helping them with their digital transformation needs. Over the course of the channel's transformation the magazine has consistently observed and commented on the ICT channel marketplace, now through monthly digital magazines as well as daily news content on the website.
From 2011
In March 2011, MicroScope was sold to TechTarget, ceased print publication and became an online magazine; the last printed edition was published on Monday 28 March 2011. Computer Weekly published its last print edition on Tuesday 5 April 2011 and similarly transferred to a digital format.
Content
Following the closure of the print edition, MicroScope became available only online and in a monthly digital format.
The classic audience for the magazine works within the 'two-tier' channel, either in distribution or at a reseller level. The readership includes volume distributors, value-added distributors, resellers, MSPs, VARs, ISVs and technology consultants.
Over the past 40 years the channel has matured and continues to go through a process of consolidation. During the time the magazine has been publishing, computers have become mainstream and the microcomputer has become a fundamentally accepted part of both the workplace and the home.
Personnel
Editors
The editor-in-chief of MicroScope, commonly known simply as "the Editor", is charged with formulating the magazine's editorial policies and overseeing corporate operations. Since its 1982 founding, the editors have been:
Peter Jackson: 1982 - 1984
Guy Kewney: 1984 - 1986 (Served as editor at large from 1982 - 1984)
John Lettice: 1987 - 1991
Keith Rodgers: 1991 - 1996
Billy MacInnes: 1996 - 2002
Simon Quicke: 2002 - present
Awards
MicroScope was awarded “Computer Journal of the Year” in 1984 by the Computer Press Association, for excellence in the field of computer journalism.
MicroScope ACEs
The MicroScope Awards for Channel Excellence (MicroScope ACEs) were launched in 2007 as prestigious awards with the aim of rewarding the achievements of distributors and resellers across the channel.
The entry process for the awards would kick off around late October or November, with the award ceremony usually taking place in late May or early June in central London.
The awards were judged by a panel of industry experts including analysts, independent consultants and editorial staff from MicroScope. The shortlist drawn up by the judges was then posted online with the readership of the magazine then given an opportunity to vote for the winners.
Award Categories as follows:
Reseller
SME reseller of the year
Storage reseller of the year
AV reseller of the year
Networking/comms reseller of the year
Security reseller of the year
Distributor
Security distributor of the year
Storage distributor of the year
Networking/comms distributor of the year
AV distributor of the year
Editor's choice
Vendor of the year
During their first three years, the MicroScope ACEs grew and established themselves as one of the leading industry awards. Following the sale of MicroScope to TechTarget, the 2011 awards ceremony was postponed.
The ACEs continued to run digitally until 2016, when the format was put on hold pending future development.
25th Anniversary Awards
The 25th anniversary awards were as follows:
Most influential person of the past 25 years
Most significant vendor of the past 25 years
Most significant distributor of the past 25 years
Most significant reseller of the past 25 years
Related Publications
MicroScope is a sister publication of ComputerWeekly.com and is part of the TechTarget network of websites that also includes SearchITchannel.com, which covers the channel activities in the US market.
See also
List of computer magazines
List of magazines in the United Kingdom
References
External links
MicroScope Official website
MicroScope ACEs Awards website
1982 establishments in the United Kingdom
2011 disestablishments in the United Kingdom
Monthly magazines published in the United Kingdom
Magazines established in 1982
Magazines disestablished in 2011
Online magazines with defunct print editions |
3085988 | https://en.wikipedia.org/wiki/History%20of%20BBC%20television%20idents | History of BBC television idents | The history of BBC television idents begins in the early 1950s, when the BBC first displayed a logo between programmes to identify its service. As new technology has become available, these devices have evolved from simple still black and white images to the sophisticated full colour short films seen today. With the arrival of digital services in the United Kingdom, and with them many more new channels, branding is perceived by broadcasters to be much more important, meaning that idents need to stand out from the competition.
This article describes the development of the BBC's main television channels' identities.
BBC Television Service/BBC One
Pre-1969
The original BBC Television Service was launched on 2 November 1936 and was taken off the air at the outbreak of war in September 1939, returning in June 1946. In December 1953 the first ident, nicknamed the "Bat's Wings", was introduced, an elaborate mechanical contraption constructed by designer Abram Games, which featured a tiny spinning globe in the centre, surrounded by two spinning "eyes", with lightning flashes to either side. The model was temperamental, and broke down shortly after it was filmed.
By the early 1960s the "Bat's Wings" had been superseded by the "BBC tv" logo within a circle, beneath which would appear a map of Britain split into the BBC's broadcast regions.
The channel's most famous emblem, the globe, appeared in its first guise on 30 September 1963. The first such ident featured the continuity announcer speaking over a rotating globe while a "BBC tv" caption would appear with the announcement, "This is BBC Television" being made.
The Noddy System Mechanical Globe (1969–1985)
On 15 November 1969, BBC1 began transmitting in colour, and introduced the first version of the "mirror globe" ident. The word "Colour", identifying this new feature, was included in the station ident, and separate, more expensive colour television licences were offered. Originally, the mirror globe had a blue logo and landmasses to enhance the clarity of the image on black and white screens. The BBC1 ident was later revised with the "Colour" identification being italicised. The globe was changed one last time on 5 September 1981 to the double-striped BBC1 logo, sitting below a lime green and blue globe on a navy blue background.
The Computer Originated World (COW) (1985–1991)
By 1985, computer graphics technology had progressed sufficiently that on 18 February the mechanical mirror globe was retired in favour of the new "Computer Originated World", or 'COW', which showed a semi-transparent blue globe with golden continents and gold "BBC1". It was created by the BBC graphics and computer departments with work starting on it in 1983, following the success of the electronic BBC2 ident and the clocks. The COW globe went down well with the public, and changed the perception of the channel. Also, for the first time, holding slides, trailers and promotions included the BBC1 golden logo, bringing the brand together. The COW globe also used the same clock face as before, with some changes.
The Virtual Globe (1991–1997)
The Computer Originated World was replaced on 16 February 1991 by a new virtual globe, designed by Martin Lambie-Nairn's branding agency, Lambie-Nairn, who had first made an impact with Channel 4's original 1982 ident. The idents were based on a filmed model but were then composited and enhanced on a computer. They were played out, without a soundtrack, from a modified Laserdisc player. The ident consisted of a figure "1" inside a rotating transparent globe surrounded by a swirling smoky atmosphere above the BBC's corporate logo – the bold italic letters B B C within three rhomboids, above three flashes.
The Balloon (1997–2002)
On 4 October 1997 the globe was updated, taking the form of a hot-air balloon filmed over various landmarks. The idents featured the new name of the channel, BBC One, a renaming that continued across the rest of the BBC's channels, and also featured the new BBC corporate logo.
Rhythm & Movement (2002–2006)
A change of controller at BBC One saw the balloon globe icon become the shortest-lived ident package of the colour television era. The new controller, Lorraine Heggessey, made no secret of her dislike of the Balloon idents, which she believed to be slow, dull and boring, saying nothing about the channel. She therefore ordered a review of the channel's branding, and as a result, after 39 years, the globe style was replaced on 29 March 2002 by new idents featuring a multicultural theme. The relaunch also saw a new logo for the channel based upon that of BBC Two, though in this case it consisted of the BBC logo with the word "ONE" below it within a red box. The box style later became a common style for the BBC's channels.
The new idents were collectively called the 'Rhythm and Movement' idents and featured dancers at various locations dancing to different musical arrangements of Peter Lawlor's theme. These proved to be hugely unpopular; some viewers accused the BBC of being overtly politically correct, as one of the idents involved disabled dancers in wheelchairs, while other viewers were dismayed that the longstanding globe motif had been abandoned after 39 years. This was also the first new presentation package not to include a clock, though one had been designed — it had become difficult to transmit the time accurately, given the delay introduced by satellites and digital transmission.
Circles (2006–2016)
After four years, the idents were themselves replaced by a new set introduced on 7 October 2006, abandoning the overtly red colour scheme while retaining red, slightly less obviously, as the main colour. The relaunch once more brought a new channel logo, with the box dropped in favour of a lowercase name, effectively appearing as "BBC one".
The idents are based on a circle motif, with content much more diverse than the previous: swimming hippos, motorbike stunt riders, children playing "ring a roses", lit windows, surfers, football players, the moon, kites, and a red arc circling the logo. The first of the new idents shown was 'Kites', appearing at 9:58 BST on 7 October. According to former channel controller Peter Fincham, the new circle motif is both a link to the classic globe icon used since 1963, and a 'nod' to the channel's heritage, as well as a symbol of unity, in the way the channel brings people together. The "Moon" and "Windows" idents were dropped in July 2008. On 2 May 2009, the circle idents were edited with shorter video sequences and new soundtracks (except "Hippos" and "Surfers").
As of 2016, this was the longest time BBC One has ever stayed with one era of idents without changing.
Oneness (2017–present)
A new set of idents themed around "oneness" was introduced on 1 January 2017. Commissioned by the BBC's in-house creative agency, it shows groups of people coming together through their activities in everyday life.
In early 2020, some special "oneness" idents were created in response to the COVID-19 pandemic to reflect life under lockdown during the crisis.
BBC Two
Launch Ident (1964–1967)
BBC Two is the second BBC channel and the third channel to air in the UK. The third channel was awarded to the BBC, which planned a brilliant night's entertainment for the opening. However, the opening extravaganza had to be rescheduled from 20 April 1964 to the following evening, as the result of a massive power failure in west London.
The channel's first logo was an animation where blue and grey stripes flew in from the left and right before a "2" and the BBC corporate logo flew in from the top and bottom. The jingle that accompanied the ident was a fanfare based on the Morse code translation of 'BBC TWO', composed by Freddie Phillips. This logo lasted three years until the introduction of colour to BBC2. Along with the ident, there was also a flip card with a larger 2 on it that was used as a sting in many ways. Continuity was also frequently live in vision, with the announcer in a studio with BBC2 branding.
The Cube (1967–1974)
For the introduction of the first British colour broadcasting service in 1967 there was no new ident, but a new logo: a 2 with a dot inside the figure. There was originally a test period where the original presentation was used with possible electronic colour added. However, it was rejected and a new ident used. This ident was formed from three light spots that come together to make the central dot. From there, a 2 is drawn around the dot. Once complete, the colour legend appears below before the 2 starts rotating slowly. The blue 2 rotates to reveal a red 2, green 2 and a white 2. The ident would continually spin after this. In 1969, when BBC1 launched its own colour service, the ident was updated to match BBC1's presentation. The 2 was now all white on a blue background, with the BBC2 logo underneath the 2. The clock that went alongside this look was originally a traditional clock face with Roman numerals before it was changed to a light blue clockface on a dark blue background with the BBC2 legend below. This clock also had the 'polo' mint centre and was filmed on BBC2's NODD, a black and white camera filming the clock, with colour added later on.
The Stripes (1974–1979)
A facelift occurred on 28 December 1974, with the "2" being formed of blue and white lines, the different colours leaving and entering from opposite sides of the screen. This was produced with a mechanical model of 23 discs, each carrying an alternating colour line. The model was run so that each disc spun in a different direction from the discs above and below it. This was the last mechanical model used on BBC2. The clock for this era was the same as in the previous era.
The Computer Generated 2 (1979–1986)
In June 1979, BBC2 adopted the world's first computer-generated ident, with the logo being drawn live every time it was played. This version of the 2 had orange double lines either side and the 2 itself was cream double lined. This stripy 2 had been used on promotions and holding slides for years prior to its launch, and now it was part of a bigger branding package. For the first few years of the ident, it would be accompanied by a fanfare as it scrolled onto the screen. There were versions in which the ident scrolled on, scrolled off or remained static. The clock used alongside this high-tech 3D ident was anything but. The old clock filmed from the NODD room survived, with a new 2D logo and sporting black and orange colours. However, in 1980, BBC2 got its own electronic clock including a 3D legend and centre dot.
The TWO (1986–1991)
On 30 March 1986, the electronically generated 2 was replaced with the letters T W O. The letters sat on a white background and were themselves essentially three-dimensional versions of the white background. However, the T had some red on it and the W had green and blue on its diagonal parts. These colours are reminiscent of the light spots and the colours of the dotted 2 of 1967. As with the previous ident, the TWO would fade into and out of the white background. All of this was an effort to make the channel appear more "highbrow".
The 1991 2s (1991–2001, 2014–2018)
The channel's association with the Lambie-Nairn branding agency began in 1991, following on from Alan Yentob becoming controller of BBC2 in 1987. Upon taking up office, Yentob realised that a review of the TWO identity needed to be carried out. Research carried out at the start of the review found that viewers saw the TWO as "dull" and "old fashioned", with a highbrow effect. To rectify this, Lambie-Nairn was hired to create a new identity. The 1986 ident was replaced by the start of the highly successful series of idents involving a sans serif '2'. The idents themselves were very simple, such as Paint or Water, and this changed people's perception of the channel, despite the fact that only the idents had changed. These new idents brought corporate branding to BBC2 for the first time, facilitating cross-channel continuity. The change also brought a unified feel to the channel. The new package also changed the style of idents. Previously the ident featured a single scene that would introduce all types of programmes. Having multiple idents (at one point, there were 40) allowed variation in how they were employed, as different idents could be used to introduce different genres of programme. Special idents were also created for themed nights and seasonal events such as Christmas. The idents were successful enough to survive the channel rebrand following the BBC's adoption of a new corporate logo in 1997. The existing idents were updated with the new logo, and displayed along with several new idents which were added into rotation.
The clock that accompanied both of these looks was a slick circular clock face on a fading white and viridian background, with a combination of dots and dashes as its counters. Its only edit was in 1997 when it changed to feature the new BBC logo.
On 19 November 2001, the idents were withdrawn.
During late 2013 and early 2014, BBC Two England and BBC Two Northern Ireland resurrected a selection of these idents (together with some of the former idents) as part of their 'Afternoon Classics' segment. BBC Two Wales also broadcast "Powder" during the 50th Anniversary of BBC Wales. As of August 2014, some of the set have come back semi-permanently for the 50th Birthday of the channel in all regions. On 1 January 2015, BBC Two was rebranded with the 1990s idents returning as the only idents in use after a short break during the Christmas season. The 2015 rebrand consists of the 1991–2001 idents, but they now have the BBC Two 50 Years box replaced with BBC Two's regular teal-coloured box logo. However, BBC Two Northern Ireland airs the idents with the BBC Two Northern Ireland logo in the centre of the idents with the box removed.
The Personality 2s (2001–2007)
After nearly eleven years, the channel received a new look on 19 November 2001 (the same day that ITV2 received a new look) with a set of robotic figure 2s, each displaying an individual personality. This set was also produced by Lambie-Nairn, with American visual effects technician Mic Graves serving as a co-creator along with Martin Lambie-Nairn, who had also created the previous set. BBC Two also became the first BBC channel to receive the new style of logo; in its case this was a purple box with the BBC logo stacked above 'TWO'.
At the time, BBC2 was perceived as not fitting in with the overall network branding, because all of the other BBC channels retained their 1997 idents. Also, rather than the variety of backgrounds seen in the previous presentation package, the new idents all featured a yellow background with a 3D figure 2 in white. Only four were originally produced, limiting the different options. For the first time on BBC2, the new idents did not feature a clock. The white numeral 2 and yellow background were occasionally replaced by previous idents from the 1991–2001 package, including "Predator/Venus Fly Trap" used to introduce the Chelsea Flower Show, as well as the Christmas 2000 ident which resurfaced to introduce coverage of the 2006 Winter Olympics. Some specially produced idents were used to highlight particular shows or programming strands, such as an updated "dog" ident to advertise the channel's "pedigree comedy". To mark the channel's 40th anniversary in 2004, a special ident combining glimpses of previous BBC2 idents was produced.
The 2007 2s (Window On The World) (2007–2014)
On 18 February 2007 the channel's presentation package was relaunched with the introduction of idents designed by advertising agency Abbott Mead Vickers BBDO and produced by Red Bee Media, costing £700,000 in total. Controller of BBC Two Roly Keating said, "These new idents embody all of BBC Two's distinctive humour, creativity, playfulness and surprise — and they're also beautifully-executed pieces of film-making in their own right". Entitled "Window on the World", these new idents feature a cut-out '2' made from various materials behind which different scenes can be seen. In addition, the on-screen presentation system and the BBC Two website were also altered to reflect the new theme. This new look also marked the first time that the 2 was altered from Lambie-Nairn's original 2; however, the difference is minor and to many unnoticeable. During March 2013, the audio was replaced with new audio to coincide with the launch of BBC Two HD. During late 2014, the idents became scarcely used before being discontinued entirely.
The Curve 2s (2018–present)
On 27 September 2018, new idents based on a "curve" motif were introduced.
BBC Choice/BBC Three
BBC Choice
BBC Choice was launched on 23 September 1998 as part of the BBC's aim of expanding into Digital TV. BBC Choice started out as being the home for programmes that would complement those being shown on BBC One and BBC Two. As a result of this, Lambie-Nairn – the branding agency who designed all the BBC Choice looks – used three different objects which all shared a common theme or word.
On 10 July 2000, BBC Choice's remit was altered by the BBC to be aimed towards the young adult audience, and as a result the idents were changed. The new package featured one of the heart shapes at the centre of a brightly coloured screen, from which other heart shapes may form with different background colours inside.
On 6 July 2001, later in the channel's life, it adopted the three orange boxes that would become closely associated with the channel. The boxes would be seen moving around a background that was initially green, becoming blue a few months into the look.
At the time of the closure of BBC Choice, and the adoption of BBC Three, the idents were adapted once more. At first, the familiar orange boxes could be seen being demolished by a wrecking ball in the final months, and come December 2002, the whole identity was changed to a building site, complete with two builders.
BBC Three
The young-adult oriented BBC Three was launched on 9 February 2003, as the successor to BBC Choice. The official launch night revealed the towering three-dimensional figure "THREE" populated by small computer generated "blobs", given voices from the BBC Sound Archive. The channel logo featured a large slanted Three, below the BBC logo inside a box.
In 2008, BBC Three controller Danny Cohen unveiled a new brand for the channel, created by Red Bee Media, and designed to emphasise its new focus on cross-platform programming. The idents were introduced on 12 February 2008. The BBC logo is viewed as a pipe with pink liquid passing through it, spelling out the 'three'. These pipes, or pools of liquid are present in most of the idents, as are large objects that appeal to the young adult audience, or features technology such as television screens.
The new idents and presentation style were introduced on 1 October 2013, retaining the logo from 2008. The idents follow the theme of "discovery", and were designed by Claire Powell at Red Bee Media. The soundtrack for the idents was composed by Chris Branch and Tom Haines at Brains & Hunch. In 2016, the logo was changed to two pillars and an exclamation mark to promote its move to online only.
BBC Knowledge/BBC Four
BBC Knowledge
BBC Knowledge was launched on 1 June 1999 with the intent of creating a multimedia learning channel. The initial identity consisted of cartoon characters, illustrated by Michael Sheehy, against an orange background and climbing 'ladders of learning'. Individual idents for standard sections were used for a time, each featuring an object and a fact about it and ending with a letter encircled at the centre of the screen. It is unclear whether these idents were replacements for the animated idents or complementary to them. Following the relaunch in 2000 and 2001, all of the previous idents were dropped in favour of a single ident, featuring numerous circles made out of different structures reflecting the new strands.
BBC Four
BBC Four was launched on 2 March 2002. The channel's first series of idents was dynamic, reacting to the frequencies of continuity announcers' voices or background music. As a result, no two idents were ever the same; however, variations were produced featuring different visualisations, such as semicircles, vibrating lines or shafts extending from the bottom surface. The channel also utilised a black box logo which was placed in the bottom right corner of the screen.
A new set of idents designed by Red Bee Media were introduced on 10 September 2005. These begin with one apparently normal image that breaks into four quarters, each showing a different view or reflection of the other. The screen sections are divided, so the child in a library may climb up a ladder from the bottom left section, and end up in the top right segment. The music used is almost exactly the same in all idents. They have had the longest lifespan out of all the BBC idents.
BBC News
BBC News 24/BBC News
The BBC News channel was launched as BBC News 24 at 17:30 GMT on Sunday, 9 November 1997 as the BBC's domestic 24-hour news channel and sister to BBC World. Between 1997 and 1999, the channel used idents based upon fictional flags as also used by BBC World albeit with different music. Both channels used the simple logo, featuring the BBC logo and the brand name, at the bottom of the screen with the ident emphasis on the flags.
A problem with BBC News output at the time was that it was fragmented across different brands. BBC News bulletins, BBC Breakfast News and BBC News 24 all had separate identities. To solve this, a major relaunch of all BBC News television output (with the exception of Breakfast News) in 1999 saw the channel adopt a common theme with the rest of the BBC's main news bulletins. The redesign involved a new cream and red colour scheme together with a large numeral, representing the time of the bulletin on the domestic bulletins but replaced with '24' for the channel, as well as a variation on the new musical composition by David Lowe.
In 2003 the channel was again relaunched with a new common BBC News style featuring a deep red and black globe and background with bright white lettering. The top of the hour sequence incorporated the first news headline into ribbons that formed part of the BBC News ident style. The first look featured a '24' numeral made out of lines bending in on screen, and was more of a cream colour than the white used by other bulletins: this was rectified in 2004.
An update to the BBC's graphics and playout systems in January 2007 also introduced new graphics for BBC News 24 and the news bulletins on BBC One. The new titles had the same look, feel and background reasoning as the previous but had been filmed again with new effects, while the graphics used maintained the previously introduced colour scheme. The new idents featured glossier graphics and rendering, with the ribbons being seen formed from different angles on the globe.
On Monday, 21 April 2008 at 08:30 GMT, BBC News 24 & BBC World were renamed as "BBC News" and "BBC World News" with an updated look to match that used by all BBC news output.
BBC Parliament
BBC Parliament took over from the cable-only Parliamentary Channel on 23 September 1998.
The channel's first on-screen branding featured the single line BBC Parliament logo over a background similar in style to water and accompanied by an orchestral musical score. This branding lasted until 2002, when it was replaced by a rotating spiral with regular teeth on the outside. As part of the sequence, occasional pulsating rings are emitted and the soundtrack is in a similar style to the David Lowe BBC News theme.
The channel took on a variation of the corporate BBC News theme in 2009 with a new ident, featuring a series of cogs moving about in place of the world in the BBC News look. The cogs have a predominantly red look to them and was designed to be an interpretation of the inner workings of Parliament. It mirrors elements of the BBC News presentation, indicating it is similar to, but not part of BBC News output.
BBC World News
BBC World News, the BBC's international 24-hour news and current affairs channel was launched on 16 January 1995 at 19:00 GMT. The channel has shared a common appearance with domestic news channel BBC News 24 since 1997, when both used the fictitious flags idents with different music. Following the major BBC News relaunch in 1999 across all output, the channel received an individual style based upon the theme introduced with music composed by David Lowe. These idents featured similar music, and a red ident using the rotational element, and also included rings in the design.
A subsequent relaunch of BBC News bulletins came in 2003 with new graphics and titles, produced by designers within the BBC itself. This design was subsequently updated in 2007. In both of these cases the design was the same as the idents and titles used by their counterparts in BBC News 24 and in BBC News as a whole. In April 2008 as part of a major £550,000 relaunch of BBC News output by Lambie-Nairn, BBC World became 'BBC World News' and was updated in its look to match that used by all BBC news output, and the BBC News Channel. BBC Arabic Television adopted a similar look in November 2009.
CBBC
CBBC was originally launched as Children's BBC in September 1985 as a strand of programming for children aged 6–13. The first idents consisted of the word Children's on top of a large BBC in a style similar to a rushed scribble, generated live on air by a BBC Micro computer. Two years after launch, the idents were replaced by an animated sequence of the letters spelling out Children's, each accompanied by an outline of an object corresponding to that letter. This was replaced by a computer-generated sequence in 1990.
In 1991, the BBC corporate revamp meant that Children's BBC was given a makeover. The result was a logo centred on a stylised Children's with a corporate BBC logo at the bottom of the screen. In 1994, the Children's BBC idents changed in style; many featured cartoons or computer generated graphics where the stylised Children's and the BBC corporate logo would feature somewhere.
Children's BBC was officially renamed CBBC in October 1997, with the production of appropriate idents. The idents all had a yellow background, and black subjects and often in the cartoon style.
When CBBC was given its own channel on the digital terrestrial platform on 11 February 2002, the CBBC "blob" ident was created. These animated 'bugs' were designed by Lambie-Nairn, and were always green in colour. The blob was later refreshed and given a 3D appearance in 2005.
CBBC relaunched again in the autumn of 2007, with a new logo revolving around the letters of CBBC, each in a different style. A new set of idents followed, revolving around scenes featuring each of the four letters before they come together at the end. These scenes could involve cartoon figures or stars of current CBBC programmes. In 2010 the logo was updated to look more 3D.
These idents were changed on 13 September 2014, then later on 14 March 2016 with a new logo.
CBeebies/CBBC/BBC Kids
CBeebies was launched on the same day as the CBBC Channel: 11 February 2002, with an original age range of pre-school children only. Following changes within the BBC Children's department, this changed to ages up to 6, with CBBC targeting ages 8 to 12.
The idents for the channel, designed by Lambie-Nairn, are the same as at launch and consist of yellow blobs, the opposite to the green blobs launched with the CBBC Channel with a much younger feel, as befits the target audience. The yellow blobs were also seen in BBC Kids' then-current idents. They directly oppose the CBBC blobs, as these blobs are gentle in their actions and to look at, whereas the green CBBC blobs looked more outgoing and violent—one of the CBBC idents at the time involved karate.
See also
BBC One 'Virtual Globe' ident
BBC One 'Balloon' idents
BBC One 'Rhythm & Movement' idents
BBC One 'Circle' idents
BBC One 'Oneness' idents
BBC Two '1991–2001' idents
BBC Two 'Personality' idents
BBC Two 'Window on the World' idents
BBC Two 'Curve' idents
History of ITV television idents
Logo of the BBC
List of BBC test cards
Test Card F
Notes
References
External links
bbc.co.uk
TV Ark – The Television Museum
The Ident Zone
The TV Room
The TV Room Plus
Idents
Television presentation in the United Kingdom
Television idents
BBC television idents |
55347 | https://en.wikipedia.org/wiki/Hierarchical%20File%20System | Hierarchical File System | Hierarchical File System (HFS) is a proprietary file system developed by Apple Inc. for use in computer systems running Mac OS. Originally designed for use on floppy and hard disks, it can also be found on read-only media such as CD-ROMs. HFS is also referred to as Mac OS Standard (or HFS Standard), while its successor, HFS Plus, is also called Mac OS Extended (or HFS Extended).
With the introduction of Mac OS X 10.6, Apple dropped support for formatting or writing HFS disks and images, which remain supported as read-only volumes. Starting with macOS 10.15, HFS disks can no longer be read.
History
Apple introduced HFS in September 1985, specifically to support Apple's first hard disk drive for the Macintosh, replacing the Macintosh File System (MFS), the original file system which had been introduced over a year and a half earlier with the first Macintosh computer. HFS drew heavily upon Apple's first hierarchical operating system (SOS) for the failed Apple III, which also served as the basis for hierarchical file systems on the Apple IIe and Apple Lisa. HFS was developed by Patrick Dirks and Bill Bruffey. It shared a number of design features with MFS that were not available in other file systems of the time (such as DOS's FAT). Files could have multiple forks (normally a data and a resource fork), which allowed the main data of the file to be stored separately from resources such as icons that might need to be localized. Files were referenced with unique file IDs rather than file names, and file names could be 255 characters long (although the Finder only supported a maximum of 31 characters).
However, MFS had been optimized to be used on very small and slow media, namely floppy disks, so HFS was introduced to overcome some of the performance problems that arrived with the introduction of larger media, notably hard drives. The main concern was the time needed to display the contents of a folder. Under MFS all of the file and directory listing information was stored in a single file, which the system had to search to build a list of the files stored in a particular folder. This worked well with a system with a few hundred kilobytes of storage and perhaps a hundred files, but as the systems grew into megabytes and thousands of files, the performance degraded rapidly.
The solution was to replace MFS's directory structure with one more suitable to larger file systems. HFS replaced the flat table structure with the Catalog File which uses a B-tree structure that could be searched very quickly regardless of size. HFS also redesigned various structures to be able to hold larger numbers, 16-bit integers being replaced by 32-bit almost universally. Oddly, one of the few places this "upsizing" did not take place was the file directory itself, which limits HFS to a total of 65,535 files on each logical disk.
While HFS is a proprietary file system format, it is well-documented; there are usually solutions available to access HFS-formatted disks from most modern operating systems.
Apple introduced HFS out of necessity with its first 20 MB hard disk offering for the Macintosh in September 1985, where it was loaded into RAM from an MFS floppy disk on boot using a patch file ("Hard Disk 20"). However, HFS was not widely introduced until it was included in the 128K ROM that debuted with the Macintosh Plus in January 1986 along with the larger 800 KB floppy disk drive for the Macintosh that also used HFS. The introduction of HFS was the first advancement by Apple to leave a Macintosh computer model behind: the original 128K Macintosh, which lacked sufficient memory to load the HFS code and was promptly discontinued.
In 1998, Apple introduced HFS Plus to address inefficient allocation of disk space in HFS and to add other improvements. HFS is still supported by current versions of Mac OS, but starting with Mac OS X, an HFS volume cannot be used for booting, and beginning with Mac OS X 10.6 (Snow Leopard), HFS volumes are read-only and cannot be created or updated. In macOS Sierra (10.12), Apple's release notes state that "The HFS Standard filesystem is no longer supported." However, read-only HFS Standard support is still present in Sierra and works as it did in previous versions.
Design
A storage volume is inherently divided into logical blocks of 512 bytes. The Hierarchical File System groups these logical blocks into allocation blocks, which can contain one or more logical blocks, depending on the total size of the volume. HFS uses a 16-bit value to address allocation blocks, limiting the number of allocation blocks to 65,535 (2^16 - 1).
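To make the arithmetic concrete, the following Python sketch estimates how large each allocation block must be for a volume of a given size to stay within the 16-bit limit. It is illustrative only: the constants mirror the figures given above, but Apple's actual formatter applies its own rounding and reserves space for volume structures, so real volumes may use slightly different values.

    LOGICAL_BLOCK = 512          # bytes per logical block
    MAX_ALLOC_BLOCKS = 65535     # allocation block numbers are 16-bit values

    def min_allocation_block_size(volume_bytes):
        # Smallest allocation block size (in bytes) that keeps the number of
        # allocation blocks at or below 65,535. Illustrative only; the real
        # HFS formatter rounds differently and reserves blocks for metadata.
        logical_blocks = -(-volume_bytes // LOGICAL_BLOCK)            # ceiling division
        logical_per_alloc = -(-logical_blocks // MAX_ALLOC_BLOCKS)    # ceiling division
        return logical_per_alloc * LOGICAL_BLOCK

    for megabytes in (20, 250, 1000):
        size = megabytes * 1024 * 1024
        print(megabytes, "MB volume ->", min_allocation_block_size(size), "byte allocation blocks")

On this rough model, a 20 MB volume (like the original Hard Disk 20) can keep 512-byte allocation blocks, while a volume approaching 1 GB is pushed to roughly 16 KB per allocation block, the figure discussed under Limitations below.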
Five structures make up an HFS volume:
Logical blocks 0 and 1 of the volume are the Boot Blocks, which contain system startup information. For example, the names of the System and Shell (usually the Finder) files which are loaded at startup.
Logical block 2 contains the Master Directory Block (aka MDB). This defines a wide variety of data about the volume itself, for example date & time stamps for when the volume was created, the location of the other volume structures such as the Volume Bitmap or the size of logical structures such as allocation blocks. There is also a duplicate of the MDB called the Alternate Master Directory Block (aka Alternate MDB) located at the opposite end of the volume in the second to last logical block. This is intended mainly for use by disk utilities and is only updated when either the Catalog File or Extents Overflow File grow in size.
Logical block 3 is the starting block of the Volume Bitmap, which keeps track of which allocation blocks are in use and which are free. Each allocation block on the volume is represented by a bit in the map: if the bit is set then the block is in use; if it is clear then the block is free to be used. Since the Volume Bitmap must have a bit to represent each allocation block, its size is determined by the size of the volume itself.
The Extent Overflow File is a B-tree that contains extra extents that record which allocation blocks are allocated to which files, once the initial three extents in the Catalog File are used up. Later versions also added the ability for the Extent Overflow File to store extents that record bad blocks, to prevent the file system from trying to allocate a bad block to a file.
The Catalog File is another B-tree that contains records for all the files and directories stored in the volume. It stores four types of records. Each file consists of a File Thread Record and a File Record while each directory consists of a Directory Thread Record and a Directory Record. Files and directories in the Catalog File are located by their unique Catalog Node ID (or CNID).
A File Thread Record stores just the name of the file and the CNID of its parent directory.
A File Record stores a variety of metadata about the file including its CNID, the size of the file, three timestamps (when the file was created, last modified, last backed up), the first file extents of the data and resource forks and pointers to the file's first data and resource extent records in the Extent Overflow File. The File Record also stores two 16 byte fields that are used by the Finder to store attributes about the file including things like its creator code, type code, the window the file should appear in and its location within the window.
A Directory Thread Record stores just the name of the directory and the CNID of its parent directory.
A Directory Record which stores data like the number of files stored within the directory, the CNID of the directory, three timestamps (when the directory was created, last modified, last backed up). Like the File Record, the Directory Record also stores two 16 byte fields for use by the Finder. These store things like the width & height and x & y co-ordinates for the window used to display the contents of the directory, the display mode (icon view, list view, etc.) of the window and the position of the window's scroll bar.
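For illustration, a File Record along the lines described above can be modelled in C roughly as follows. This is a simplified sketch rather than Apple's actual on-disk layout: the field names, the three-extent arrays and the fixed-width types are assumptions chosen to mirror the description, and the real records are packed, big-endian structures with additional fields.

#include <stdint.h>

/* One extent: a run of contiguous allocation blocks (illustrative). */
struct ExtentDescriptor {
    uint16_t startBlock;   /* first allocation block of the run */
    uint16_t blockCount;   /* number of allocation blocks in the run */
};

/* Simplified sketch of a Catalog File "File Record" as described above. */
struct FileRecord {
    uint32_t cnid;             /* Catalog Node ID of the file */
    uint32_t dataForkSize;     /* logical size of the data fork */
    uint32_t rsrcForkSize;     /* logical size of the resource fork */
    uint32_t createDate;       /* created */
    uint32_t modifyDate;       /* last modified */
    uint32_t backupDate;       /* last backed up */
    struct ExtentDescriptor dataExtents[3]; /* first data-fork extents */
    struct ExtentDescriptor rsrcExtents[3]; /* first resource-fork extents */
    uint8_t  finderInfo[16];    /* Finder data: type/creator codes, window, position */
    uint8_t  extFinderInfo[16]; /* additional Finder data */
};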
Limitations
The Catalog File, which stores all the file and directory records in a single data structure, results in performance problems when the system allows multitasking, as only one program can write to this structure at a time, meaning that many programs may be waiting in queue due to one program "hogging" the system. It is also a serious reliability concern, as damage to this file can destroy the entire file system. This contrasts with other file systems that store file and directory records in separate structures (such as DOS's FAT file system or the Unix File System), where having structure distributed across the disk means that damaging a single directory is generally non-fatal and the data may possibly be re-constructed with data held in the non-damaged portions.
Additionally, the limit of 65,535 allocation blocks resulted in files having a "minimum" size equivalent to 1/65,535th the size of the disk. Thus, any given volume, no matter its size, could only store a maximum of 65,535 files. Moreover, any file would be allocated more space than it actually needed, up to the allocation block size. When disks were small, this was of little consequence, because the individual allocation block size was trivial, but as disks started to approach the 1 GB mark, the smallest amount of space that any file could occupy (a single allocation block) became excessively large, wasting significant amounts of disk space. For example, on a 1 GB disk, the allocation block size under HFS is 16 KB, so even a 1 byte file would take up 16 KB of disk space. This situation was less of a problem for users having large files (such as pictures, databases or audio) because these larger files wasted less space as a percentage of their file size. Users with many small files, on the other hand, could lose a copious amount of space due to the large allocation block size. This made partitioning disks into smaller logical volumes very appealing for Mac users, because small documents stored on a smaller volume would take up much less space than if they resided on a large partition. The same problem existed in the FAT16 file system.
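The space overhead described above follows directly from the 16-bit allocation block count. The following minimal C sketch shows the arithmetic; the rounding rule (smallest multiple of 512 bytes that keeps the block count under the limit) is an assumption of this illustration, and a real volume also reserves blocks for its own structures.

#include <stdio.h>

#define LOGICAL_BLOCK    512UL    /* bytes per logical block */
#define MAX_ALLOC_BLOCKS 65535UL  /* 16-bit allocation block addresses */

/* Smallest allocation block size (a multiple of 512 bytes) that lets the
   whole volume be addressed with at most 65,535 allocation blocks. */
static unsigned long alloc_block_size(unsigned long volume_bytes)
{
    unsigned long logical_blocks = volume_bytes / LOGICAL_BLOCK;
    unsigned long per_alloc =
        (logical_blocks + MAX_ALLOC_BLOCKS - 1) / MAX_ALLOC_BLOCKS;
    return per_alloc * LOGICAL_BLOCK;
}

int main(void)
{
    /* For a 1 GB (2^30-byte) volume this prints 16896 bytes -- roughly the
       16 KB figure cited above -- so even a 1-byte file occupies that much. */
    printf("%lu\n", alloc_block_size(1024UL * 1024UL * 1024UL));
    return 0;
}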
HFS preserves the case of a file name when the file is created or renamed, but is case-insensitive in operation.
According to bombich.com, HFS is no longer supported on Catalina and future macOS releases.
See also
Comparison of file systems
APFS
HFS Plus
References
External links
HFS specification from developer.apple.com
The HFS Primer (PDF) from MWJ - dead link as of 27 May 2017
Filesystems HOWTO: HFS - slightly out of date
HFS File Structure Explained - early description of HFS
DiskWarrior - Software to eliminate all damage to the HFS disk directory
MacDrive - Software to read and write HFS/HFS Plus-formatted disks on Microsoft Windows
hfsutils - open-source software to manipulate HFS on Unix, DOS, Windows, OS/2
Disk file systems
Apple Inc. file systems
Macintosh operating systems
Computer file systems |
17107125 | https://en.wikipedia.org/wiki/ZTerm | ZTerm | ZTerm is a shareware terminal emulator for Macintosh operating systems. It was introduced in 1992 for System 7 and has been updated to run on macOS. Its name comes from its use of the ZModem file transfer protocol, which ZTerm implemented in a particularly high-performance package. In contrast to the built-in macOS Terminal app, which only communicates with other programs, ZTerm only communicates with hardware serial ports.
Description
When it was first introduced in 1992, ZTerm was one of the highest performing terminal emulators on the Mac, both in terms of basic text display as well as file transfer performance. ZTerm was widely regarded as the best terminal program on the Mac.
Its hardware support included carrier detect (CD), hardware hangup (DTR) and hardware flow control, as well as speeds up to 115,200 bit/s on those machines that supported it. These features were not universally supported in Mac hardware, so many terminal emulators simply didn't bother to implement them at all. Even if these speeds were offered, most emulators of the era were so slow that they had trouble keeping up with faster modems, especially 9600 bit/s and faster.
ZTerm supported one of the widest variety of file transfer protocols available on the Mac, including a full implementation of ZModem, YModem, YModem-G, almost all of the common varieties of XModem with different packet sizes and error correction methods, and even the rare but useful B protocol (CIS-B) for use on Compuserve. ZTerm also supported auto-starting transfers from ZModem and CIS-B, where commands from the host triggered transfers from the client.
Additionally, ZTerm included a complete PC graphics character set and ANSI escape codes, including color. This made it one of the few terminals on the Mac that properly displayed ASCII art, and allowed full interaction with PC-based bulletin board systems (BBS) that used these features extensively. ZTerm added the ability to use the mouse to position the cursor, sending the correct stream of ANSI codes to move it from the current to the clicked location.
Finally, ZTerm included a 10-verb built-in scripting language that allowed it to automate basic tasks. In addition to be able to run these manually, when a service was dialed using an entry in the editable Dial menu, ZTerm would look for a script with the same name and run it automatically.
Versions
The first public version of ZTerm was 0.9, which was released in 1992. Two major versions followed; 1.0 of April 1994 was a major release that added 16-color ANSI support instead of 8-color, user-selected fonts including Shift JIS support, Kermit protocol support, and auto-opening of downloaded files. The latter was useful when used with offline mail readers like Blue Wave. Version 1.0.1, released in October 1995, was mostly a bug-fix release.
By the time that macOS was being released around 2002, the BBS world had largely disappeared. However, a number of devices (including some routers and lab equipment) still use serial ports to communicate, typically for diagnostic and debugging purposes. On 19 April 2001, Alverson released version 1.1b4 that ran on Mac OS X 10.0, Mac OS 8.6 and Mac OS 9 using Carbon. Later a "Classic" version was released that did not require Carbon, allowing it to run on older machines that could not support Mac OS 8 or Mac OS 9. On 18 July 2011, Alverson released a Universal Binary version 1.2 that runs on Mac OS X 10.4 through Mac OS X 10.14. Because this version is not 64-bit, however, ZTerm will not run on Mac OS X 10.15 and above.
On modern machines without built-in serial ports, ZTerm can identify and use a wide variety of USB-based serial devices. The list of supported hardware includes the standard Macintosh serial ports and GeoPort on pre-PowerPC G3 PowerPC Macintosh computers, the built-in Apple internal modem slot, and the USB ports on PowerPC G3, PowerPC G4, PowerPC G5 and Intel-based Macintosh computers. It can also be configured to work with adapters, including USB-to-serial adapters such as those made by Keyspan and internal-modem-slot-to-serial adapters such as the Stealth Serial Port and the now-discontinued Griffin Technology gPort under OS X, giving it a continuing niche among BBS users and hardware tinkerers.
References
External links
ZTerm 0.9 FAQ 1.6
Classic Mac OS software
MacOS software
Terminal emulators |
1060721 | https://en.wikipedia.org/wiki/Basic%20Linear%20Algebra%20Subprograms | Basic Linear Algebra Subprograms | Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C ("CBLAS interface") and Fortran ("BLAS interface"). Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations will take advantage of special floating point hardware such as vector registers or SIMD instructions.
It originated as a Fortran library in 1979 and its interface was standardized by the BLAS Technical (BLAST) Forum, whose latest BLAS report can be found on the netlib website. This Fortran library is known as the reference implementation (sometimes confusingly referred to as the BLAS library) and is not optimized for speed but is in the public domain.
Most libraries that offer linear algebra routines conform to the BLAS interface, allowing library users to develop programs that are indifferent to the BLAS library being used. BLAS implementations have seen a dramatic growth in use with the development of GPGPU, with cuBLAS and rocBLAS being prime examples. CPU-based examples of BLAS libraries include: OpenBLAS, BLIS (BLAS-like Library Instantiation Software), Arm Performance Libraries, ATLAS, and Intel Math Kernel Library (MKL). AMD maintains a fork of BLIS that is optimized for the AMD platform. ATLAS is a portable library that automatically optimizes itself for an arbitrary architecture. MKL is a freeware and proprietary vendor library optimized for x86 and x86-64 with a performance emphasis on Intel processors. OpenBLAS is an open-source library that is hand-optimized for many of the popular architectures. The LINPACK benchmarks rely heavily on the BLAS routine gemm for their performance measurements.
Many numerical software applications use BLAS-compatible libraries to do linear algebra computations, including LAPACK, LINPACK, Armadillo, GNU Octave, Mathematica, MATLAB, NumPy, R, and Julia.
Background
With the advent of numerical programming, sophisticated subroutine libraries became useful. These libraries would contain subroutines for common high-level mathematical operations such as root finding, matrix inversion, and solving systems of equations. The language of choice was FORTRAN. The most prominent numerical programming library was IBM's Scientific Subroutine Package (SSP). These subroutine libraries allowed programmers to concentrate on their specific problems and avoid re-implementing well-known algorithms. The library routines would also be better than average implementations; matrix algorithms, for example, might use full pivoting to get better numerical accuracy. The library routines would also have more efficient routines. For example, a library may include a program to solve a matrix that is upper triangular. The libraries would include single-precision and double-precision versions of some algorithms.
Initially, these subroutines used hard-coded loops for their low-level operations. For example, if a subroutine needed to perform a matrix multiplication, then the subroutine would have three nested loops. Linear algebra programs have many common low-level operations (the so-called "kernel" operations, not related to operating systems). Between 1973 and 1977, several of these kernel operations were identified. These kernel operations became defined subroutines that math libraries could call. The kernel calls had advantages over hard-coded loops: the library routine would be more readable, there were fewer chances for bugs, and the kernel implementation could be optimized for speed. A specification for these kernel operations using scalars and vectors, the level-1 Basic Linear Algebra Subroutines (BLAS), was published in 1979. BLAS was used to implement the linear algebra subroutine library LINPACK.
The BLAS abstraction allows customization for high performance. For example, LINPACK is a general purpose library that can be used on many different machines without modification. LINPACK could use a generic version of BLAS. To gain performance, different machines might use tailored versions of BLAS. As computer architectures became more sophisticated, vector machines appeared. BLAS for a vector machine could use the machine's fast vector operations. (While vector processors eventually fell out of favor, vector instructions in modern CPUs are essential for optimal performance in BLAS routines.)
Other machine features became available and could also be exploited. Consequently, BLAS was augmented from 1984 to 1986 with level-2 kernel operations that concerned vector-matrix operations. Memory hierarchy was also recognized as something to exploit. Many computers have cache memory that is much faster than main memory; keeping matrix manipulations localized allows better usage of the cache. In 1987 and 1988, the level 3 BLAS were identified to do matrix-matrix operations. The level 3 BLAS encouraged block-partitioned algorithms. The LAPACK library uses level 3 BLAS.
The original BLAS concerned only densely stored vectors and matrices. Further extensions to BLAS, such as for sparse matrices, have been addressed.
Functionality
BLAS functionality is categorized into three sets of routines called "levels", which correspond to both the chronological order of definition and publication, as well as the degree of the polynomial in the complexities of algorithms; Level 1 BLAS operations typically take linear time, O(n), Level 2 operations quadratic time, O(n²), and Level 3 operations cubic time, O(n³). Modern BLAS implementations typically provide all three levels.
Level 1
This level consists of all the routines described in the original presentation of BLAS (1979), which defined only vector operations on strided arrays: dot products, vector norms, a generalized vector addition of the form

y ← αx + y

(called "axpy", "a x plus y") and several other operations.
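Through the C ("CBLAS") interface mentioned above, a double-precision axpy call looks roughly like the following; any conforming BLAS implementation can be linked in, and the vector contents here are arbitrary.

#include <cblas.h>
#include <stdio.h>

int main(void)
{
    double x[3] = {1.0, 2.0, 3.0};
    double y[3] = {10.0, 20.0, 30.0};

    /* Level 1 axpy: y := alpha*x + y, with unit strides. */
    cblas_daxpy(3, 2.0, x, 1, y, 1);

    printf("%g %g %g\n", y[0], y[1], y[2]);   /* prints 12 24 36 */
    return 0;
}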
Level 2
This level contains matrix-vector operations including, among other things, a generalized matrix-vector multiplication (gemv):

y ← αAx + βy

as well as a solver for x in the linear equation

Tx = y

with T being triangular. Design of the Level 2 BLAS started in 1984, with results published in 1988. The Level 2 subroutines are especially intended to improve performance of programs using BLAS on vector processors, where Level 1 BLAS are suboptimal "because they hide the matrix-vector nature of the operations from the compiler."
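The corresponding Level 2 call through the CBLAS interface, here computing y := alpha*A*x + beta*y for a row-major 2×3 matrix (the data values are arbitrary):

#include <cblas.h>
#include <stdio.h>

int main(void)
{
    double A[2 * 3] = {1, 2, 3,
                       4, 5, 6};   /* 2x3 matrix, stored row-major */
    double x[3] = {1, 1, 1};
    double y[2] = {0, 0};

    /* Level 2 gemv: y := 1.0*A*x + 0.0*y; the leading dimension of A is 3. */
    cblas_dgemv(CblasRowMajor, CblasNoTrans, 2, 3,
                1.0, A, 3, x, 1, 0.0, y, 1);

    printf("%g %g\n", y[0], y[1]);   /* prints 6 15 */
    return 0;
}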
Level 3
This level, formally published in 1990, contains matrix-matrix operations, including a "general matrix multiplication" (gemm), of the form

C ← αAB + βC

where A and B can optionally be transposed or hermitian-conjugated inside the routine, and all three matrices may be strided. The ordinary matrix multiplication AB can be performed by setting α to one and C to an all-zeros matrix of the appropriate size.
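In the CBLAS interface a call to the double-precision gemm routine looks roughly like the following; the dimensions and matrix contents are arbitrary, and row-major storage is assumed.

#include <cblas.h>
#include <stdio.h>

int main(void)
{
    enum { M = 2, K = 3, N = 2 };
    double A[M * K] = {1, 2, 3,
                       4, 5, 6};    /* M x K */
    double B[K * N] = {1, 0,
                       0, 1,
                       1, 1};       /* K x N */
    double C[M * N] = {0};          /* M x N result */

    /* Level 3 gemm: C := alpha*A*B + beta*C. With beta = 0 the prior
       contents of C are ignored, giving the plain product A*B. */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                M, N, K, 1.0, A, K, B, N, 0.0, C, N);

    printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);  /* 4 5 / 10 11 */
    return 0;
}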
Also included in Level 3 are routines for computing

B ← αT⁻¹B

where T is a triangular matrix, among other functionality.
Due to the ubiquity of matrix multiplications in many scientific applications, including for the implementation of the rest of Level 3 BLAS, and because faster algorithms exist beyond the obvious repetition of matrix-vector multiplication, gemm is a prime target of optimization for BLAS implementers. E.g., by decomposing one or both of A, B into block matrices, gemm can be implemented recursively. This is one of the motivations for including the β parameter, so the results of previous blocks can be accumulated. Note that this decomposition requires the special case β = 1, which many implementations optimize for, thereby eliminating one multiplication for each value of C. This decomposition allows for better locality of reference both in space and time of the data used in the product. This, in turn, takes advantage of the cache on the system. For systems with more than one level of cache, the blocking can be applied a second time to the order in which the blocks are used in the computation. Both of these levels of optimization are used in implementations such as ATLAS. More recently, implementations by Kazushige Goto have shown that blocking only for the L2 cache, combined with careful amortizing of copying to contiguous memory to reduce TLB misses, is superior to ATLAS. A highly tuned implementation based on these ideas is part of the GotoBLAS, OpenBLAS and BLIS.
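A minimal sketch of the blocking idea in plain C is shown below. It is not a tuned implementation: the block size, the row-major layout and the loop order are arbitrary choices for illustration, whereas real implementations such as ATLAS or GotoBLAS add packing, vectorization and cache-aware tile sizes.

/* Naive blocked update C := C + A*B for row-major n x n matrices.
   Each block of C accumulates into results from previous (kk) blocks --
   the beta = 1 special case mentioned above. */
#define BS 64   /* block size, chosen arbitrarily */

static void blocked_gemm(int n, const double *A, const double *B, double *C)
{
    for (int ii = 0; ii < n; ii += BS)
        for (int kk = 0; kk < n; kk += BS)
            for (int jj = 0; jj < n; jj += BS)
                for (int i = ii; i < ii + BS && i < n; i++)
                    for (int k = kk; k < kk + BS && k < n; k++)
                        for (int j = jj; j < jj + BS && j < n; j++)
                            C[i * n + j] += A[i * n + k] * B[k * n + j];
}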
A common variation of gemm is the gemm3m, which calculates a complex product using "three real matrix multiplications and five real matrix additions instead of the conventional four real matrix multiplications and two real matrix additions", an algorithm similar to the Strassen algorithm first described by Peter Ungar.
Implementations
Accelerate Apple's framework for macOS and iOS, which includes tuned versions of BLAS and LAPACK.
Arm Performance Libraries Arm Performance Libraries, supporting Arm 64-bit AArch64-based processors, available from Arm.
ATLAS Automatically Tuned Linear Algebra Software, an open source implementation of BLAS APIs for C and Fortran 77.
BLIS BLAS-like Library Instantiation Software framework for rapid instantiation. Optimized for most modern CPUs. BLIS is a complete refactoring of the GotoBLAS that reduces the amount of code that must be written for a given platform.
C++ AMP BLAS The C++ AMP BLAS Library is an open source implementation of BLAS for Microsoft's AMP language extension for Visual C++.
cuBLAS Optimized BLAS for NVIDIA based GPU cards, requiring few additional library calls.
NVBLAS Optimized BLAS for NVIDIA based GPU cards, providing only Level 3 functions, but as direct drop-in replacement for other BLAS libraries.
clBLAS An OpenCL implementation of BLAS by AMD. Part of the AMD Compute Libraries.
clBLAST A tuned OpenCL implementation of most of the BLAS api.
Eigen BLAS A Fortran 77 and C BLAS library implemented on top of the MPL-licensed Eigen library, supporting x86, x86-64, ARM (NEON), and PowerPC architectures.
ESSL IBM's Engineering and Scientific Subroutine Library, supporting the PowerPC architecture under AIX and Linux.
GotoBLAS Kazushige Goto's BSD-licensed implementation of BLAS, tuned in particular for Intel Nehalem/Atom, VIA Nanoprocessor, AMD Opteron.
GNU Scientific Library Multi-platform implementation of many numerical routines. Contains a CBLAS interface.
HP MLIB HP's Math library supporting IA-64, PA-RISC, x86 and Opteron architecture under HP-UX and Linux.
Intel MKL The Intel Math Kernel Library, supporting x86 32-bits and 64-bits, available free from Intel. Includes optimizations for Intel Pentium, Core and Intel Xeon CPUs and Intel Xeon Phi; support for Linux, Windows and macOS.
MathKeisan NEC's math library, supporting NEC SX architecture under SUPER-UX, and Itanium under Linux
Netlib BLAS The official reference implementation on Netlib, written in Fortran 77.
Netlib CBLAS Reference C interface to the BLAS. It is also possible (and popular) to call the Fortran BLAS from C.
OpenBLAS Optimized BLAS based on GotoBLAS, supporting x86, x86-64, MIPS and ARM processors.
PDLIB/SX NEC's Public Domain Mathematical Library for the NEC SX-4 system.
rocBLAS Implementation that runs on AMD GPUs via ROCm.
SCSL SGI's Scientific Computing Software Library contains BLAS and LAPACK implementations for SGI's Irix workstations.
Sun Performance Library Optimized BLAS and LAPACK for SPARC, Core and AMD64 architectures under Solaris 8, 9, and 10 as well as Linux.
uBLAS A generic C++ template class library providing BLAS functionality. Part of the Boost library. It provides bindings to many hardware-accelerated libraries in a unifying notation. Moreover, uBLAS focuses on correctness of the algorithms using advanced C++ features.
Libraries using BLAS
Armadillo Armadillo is a C++ linear algebra library aiming towards a good balance between speed and ease of use. It employs template classes, and has optional links to BLAS/ATLAS and LAPACK. It is sponsored by NICTA (in Australia) and is licensed under a free license.
LAPACK LAPACK is a higher level Linear Algebra library built upon BLAS. Like BLAS, a reference implementation exists, but many alternatives like libFlame and MKL exist.
Mir An LLVM-accelerated generic numerical library for science and machine learning written in D. It provides generic linear algebra subprograms (GLAS). It can be built on a CBLAS implementation.
Similar libraries (not compatible with BLAS)
Elemental Elemental is an open source software for distributed-memory dense and sparse-direct linear algebra and optimization.
HASEM is a C++ template library, being able to solve linear equations and to compute eigenvalues. It is licensed under BSD License.
LAMA The Library for Accelerated Math Applications (LAMA) is a C++ template library for writing numerical solvers targeting various kinds of hardware (e.g. GPUs through CUDA or OpenCL) on distributed memory systems, hiding the hardware specific programming from the program developer
MTL4 The Matrix Template Library version 4 is a generic C++ template library providing sparse and dense BLAS functionality. MTL4 establishes an intuitive interface (similar to MATLAB) and broad applicability thanks to generic programming.
Sparse BLAS
Several extensions to BLAS for handling sparse matrices have been suggested over the course of the library's history; a small set of sparse matrix kernel routines was finally standardized in 2002.
Batched BLAS
The traditional BLAS functions have been also ported to architectures that support large amounts of parallelism such as GPUs. Here, the traditional BLAS functions provide typically good performance for large matrices. However, when computing e.g., matrix-matrix-products of many small matrices by using the GEMM routine, those architectures show significant performance losses. To address this issue, in 2017 a batched version of the BLAS function has been specified.
Taking the GEMM routine from above as an example, the batched version performs the following computation simultaneously for many matrices:

C[k] ← αA[k]B[k] + βC[k]

The index k in square brackets indicates that the operation is performed for all matrices in a stack. Often, this operation is implemented for a strided batched memory layout where all matrices follow concatenated in the arrays A, B and C.
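The semantics of that strided layout can be sketched in C as a loop of ordinary CBLAS gemm calls over matrices stored back-to-back; an actual batched BLAS implementation would execute these in parallel rather than serially, and the square-matrix dimensions here are an illustrative simplification.

#include <cblas.h>

/* All matrices are n x n, row-major, and concatenated in A, B and C:
   matrix k of the batch starts at offset k * stride, with stride = n * n. */
static void strided_batched_gemm(int batch, int n, double alpha, double beta,
                                 const double *A, const double *B, double *C)
{
    long stride = (long)n * n;
    for (int k = 0; k < batch; k++) {
        /* C[k] := alpha * A[k] * B[k] + beta * C[k] */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, alpha,
                    A + k * stride, n,
                    B + k * stride, n,
                    beta,
                    C + k * stride, n);
    }
}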
Batched BLAS functions can be a versatile tool and allow e.g. a fast implementation of exponential integrators and Magnus integrators that handle long integration periods with many time steps. Here, the matrix exponentiation, the computationally expensive part of the integration, can be implemented in parallel for all time-steps by using Batched BLAS functions.
See also
List of numerical libraries
Math Kernel Library, math library optimized for the Intel architecture; includes BLAS, LAPACK
Numerical linear algebra, the type of problem BLAS solves
References
Further reading
J. J. Dongarra, J. Du Croz, S. Hammarling, and R. J. Hanson, Algorithm 656: An extended set of FORTRAN Basic Linear Algebra Subprograms, ACM Trans. Math. Softw., 14 (1988), pp. 18–32.
J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling, A set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Softw., 16 (1990), pp. 1–17.
J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling, Algorithm 679: A set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Softw., 16 (1990), pp. 18–28.
New BLAS
L. S. Blackford, J. Demmel, J. Dongarra, I. Duff, S. Hammarling, G. Henry, M. Heroux, L. Kaufman, A. Lumsdaine, A. Petitet, R. Pozo, K. Remington, R. C. Whaley, An Updated Set of Basic Linear Algebra Subprograms (BLAS), ACM Trans. Math. Softw., 28-2 (2002), pp. 135–151.
J. Dongarra, Basic Linear Algebra Subprograms Technical Forum Standard, International Journal of High Performance Applications and Supercomputing, 16(1) (2002), pp. 1–111, and International Journal of High Performance Applications and Supercomputing, 16(2) (2002), pp. 115–199.
External links
BLAS homepage on Netlib.org
BLAS FAQ
BLAS Quick Reference Guide from LAPACK Users' Guide
Lawson Oral History One of the original authors of the BLAS discusses its creation in an oral history interview. Charles L. Lawson Oral history interview by Thomas Haigh, 6 and 7 November 2004, San Clemente, California. Society for Industrial and Applied Mathematics, Philadelphia, PA.
Dongarra Oral History In an oral history interview, Jack Dongarra explores the early relationship of BLAS to LINPACK, the creation of higher level BLAS versions for new architectures, and his later work on the ATLAS system to automatically optimize BLAS for particular machines. Jack Dongarra, Oral history interview by Thomas Haigh, 26 April 2005, University of Tennessee, Knoxville TN. Society for Industrial and Applied Mathematics, Philadelphia, PA
How does BLAS get such extreme performance? Ten naive 1000×1000 matrix multiplications (1010 floating point multiply-adds) takes 15.77 seconds on 2.6 GHz processor; BLAS implementation takes 1.32 seconds.
An Overview of the Sparse Basic Linear Algebra Subprograms: The New Standard from the BLAS Technical Forum
Numerical linear algebra
Numerical software
Public-domain software with source code |
62778843 | https://en.wikipedia.org/wiki/Static%20application%20security%20testing | Static application security testing | Static application security testing (SAST) is used to secure software by reviewing the source code of the software to identify sources of vulnerabilities. Although the process of statically analyzing the source code has existed as long as computers have existed, the technique spread to security in the late 1990s, with the first public discussion of SQL injection appearing in 1998, as Web applications integrated new technologies like JavaScript and Flash.
Unlike dynamic application security testing (DAST) tools for black-box testing of application functionality, SAST tools focus on the code content of the application, white-box testing.
A SAST tool scans the source code of applications and its components to identify potential security vulnerabilities in their software and architecture.
Static analysis tools can detect an estimated 50% of existing security vulnerabilities.
In SDLC, SAST is performed early in the development process and at code level, and also when all pieces of code and components are put together in a consistent testing environment. SAST is also used for software quality assurance, even if the many resulting false positives impede its adoption by developers.
SAST tools are integrated into the development process to help development teams as they are primarily focusing on developing and delivering software respecting requested specifications.
SAST tools, like other security tools, focus on reducing the risk that applications will suffer downtime or that private information stored in applications will be compromised.
For 2018, the Privacy Rights Clearinghouse database shows that more than 612 million records were compromised by hacking.
Overview
Three main techniques are used to test the security of applications before their release: static application security testing (SAST), dynamic application security testing (DAST), and interactive application security testing (IAST), a combination of the two.
Static analysis tools examine the text of a program syntactically. They look for a fixed set of patterns or rules in the source code. Theoretically, they can also examine a compiled form of the software. This technique relies on instrumentation of the code to do the mapping between compiled components and source code components to identify issues.
Static analysis can be done manually as a code review or auditing of the code for different purposes, including security, but it is time-consuming.
The precision of a SAST tool is determined by its scope of analysis and the specific techniques used to identify vulnerabilities. Different levels of analysis include:
function level - sequences of instruction.
file or class-level - an extensible program-code-template for object creation.
application level - a program or group of programs that interact.
The scope of the analysis determines its accuracy and capacity to detect vulnerabilities using contextual information.
At a function level, a common technique is the construction of an Abstract syntax tree to control the flow of data within the function.
Since the late 1990s, the need to adapt to business challenges has transformed software development with componentization, enforced by processes and the organization of development teams.
Following the flow of data between all the components of an application or group of applications allows validation of required calls to dedicated procedures for sanitization and that proper actions are taken to taint data in specific pieces of code.
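As a concrete illustration, the kind of data-flow finding described above can be shown on a short C fragment; the function and variable names are invented for this example. A taint-tracking SAST tool would flag the first function, where attacker-controlled input reaches a shell command without sanitization, while a tool configured to treat is_clean() as a sanitizer would accept the second.

#include <stdio.h>
#include <stdlib.h>

/* Flagged: 'username' is tainted (it comes from a request) and flows
   into system() without passing through any sanitization routine. */
void lookup_unsafe(const char *username)
{
    char cmd[256];
    snprintf(cmd, sizeof cmd, "ls /home/%s", username);  /* taint propagates */
    system(cmd);                                         /* sink: command injection */
}

/* A dedicated sanitization routine: only lowercase letters and digits pass. */
static int is_clean(const char *s)
{
    for (; *s; s++)
        if (!((*s >= 'a' && *s <= 'z') || (*s >= '0' && *s <= '9')))
            return 0;
    return 1;
}

/* Not flagged (by a tool that knows is_clean clears the taint): the tainted
   value is validated before it can reach the dangerous sink. */
void lookup_checked(const char *username)
{
    char cmd[256];
    if (!is_clean(username))
        return;
    snprintf(cmd, sizeof cmd, "ls /home/%s", username);
    system(cmd);
}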
The rise of web applications entailed testing them: Verizon's 2016 Data Breach Investigations Report found that 40% of all data breaches exploited web application vulnerabilities.
As well as external security validations, there is a rise in focus on internal threats. The Clearswift Insider Threat Index (CITI) has reported that 92% of their respondents in a 2015 survey said they had experienced IT or security incidents in the previous 12 months and that 74% of these breaches were originated by insiders. Lee Hadlington categorized internal threats in 3 categories: malicious, accidental, and unintentional. Mobile applications' explosive growth implies securing applications earlier in the development process to reduce malicious code development.
SAST strengths
The earlier a vulnerability is fixed in the SDLC, the cheaper it is to fix. Costs to fix in development are 10 times lower than in testing, and 100 times lower than in production.
SAST tools run automatically, either at the code level or application-level and do not require interaction. When integrated into a CI/CD context, SAST tools can be used to automatically stop the integration process if critical vulnerabilities are identified.
Because the tool scans the entire source code, it can cover 100% of it, while dynamic application security testing covers only the executed paths, possibly missing parts of the application or insecure settings in configuration files.
SAST tools can offer extended functionalities such as quality and architectural testing. There is a direct correlation between quality and security: poor-quality software is also poorly secured software.
SAST weaknesses
Even though developers are positive about the usage of SAST tools, there are different challenges to the adoption of SAST tools by developers.
With agile processes in software development, early integration of SAST surfaces many bugs, as developers using this framework focus first on features and delivery.
Scanning many lines of code with SAST tools may result in hundreds or thousands of vulnerability warnings for a single application. It generates many false positives, increasing investigation time and reducing trust in such tools. This is particularly the case when the context of the vulnerability cannot be caught by the tool.
References
Software
Computer security software
Program analysis
Software development process
Agile software development |
13611 | https://en.wikipedia.org/wiki/Heretic%20%28video%20game%29 | Heretic (video game) | Heretic is a dark fantasy first-person shooter video game released in 1994. It was developed by Raven Software and published by id Software through GT Interactive.
Using a modified version of the Doom engine, Heretic was one of the first first-person games to feature inventory manipulation and the ability to look up and down. It also introduced multiple gib objects that spawned when a character suffered a death by extreme force or heat. Previously, the character would simply crumple into a heap. The game used randomised ambient sounds and noises, such as evil laughter, chains rattling, distantly ringing bells, and water dripping in addition to the background music to further enhance the atmosphere. The music in the game was composed by Kevin Schilder. An indirect sequel, Hexen: Beyond Heretic, was released the following year. Heretic II was released in 1998, which served as a direct sequel continuing the story.
Plot
Three brothers (D'Sparil, Korax, and Eidolon), known as the Serpent Riders, have used their powerful magic to possess seven kings of Parthoris, turning them into mindless puppets and corrupting their armies. The Sidhe elves resist the Serpent Riders' magic. The Serpent Riders thus declared the Sidhe as heretics and waged war against them. The Sidhe are forced to take a drastic measure to sever the natural power of the kings destroying them and their armies, but at the cost of weakening the elves' power, giving the Serpent Riders an advantage to slay the elders. While the Sidhe retreat, one elf (revealed to be named Corvus in Heretic II) sets off on a quest of vengeance against the weakest of the three Serpent Riders, D'Sparil. He travels through the "City of the Damned", the ruined capital of the Sidhe (its real name is revealed to be Silverspring in Heretic II), then past the demonic breeding grounds of Hell's Maw and finally the secret Dome of D'Sparil.
The player must first fight through the undead hordes infesting the location where the elders performed their ritual. At its end is the gateway to Hell's Maw, guarded by the Iron Liches. After defeating them, the player must seal the portal and so prevent further infestation, but after he enters the portal guarded by the Maulotaurs, he finds himself inside D'Sparil's dome. After killing D'Sparil, Corvus ends up on a perilous journey with little hope of returning home. However, he eventually succeeds in his endeavour, only to find that Parthoris is in disarray once again.
Gameplay
The gameplay of Heretic is heavily derived from Doom, with a level-based structure and an emphasis on finding the proper keys to progress. Many weapons are similar to those from Doom; the early weapons in particular are near-exact copies in functionality to those seen in Doom. Raven added a number of features to Heretic that differentiated it from Doom, notably interactive environments, such as rushing water that pushes the player along, and inventory items. In Heretic, the player can pick up many different items to use at their discretion. These items range from health potions to the "morph ovum", which transforms enemies into chickens. One of the most notable pickups that can be found is the "Tome of Power" which acts as a secondary firing mode for certain weapons, resulting in a much more powerful projectile from each weapon, some of which change the look of the projectile entirely. Heretic also features an improved version of the Doom engine, sporting the ability to look up and down within constraints, as well as fly. However, the rendering method for looking up and down merely uses a proportional pixel-shearing effect rather than any new rendering algorithm, which distorts the view considerably when looking at high-elevation angles.
As with Doom, Heretic contains various cheat codes that allow the player to be invulnerable, obtain every weapon, be able to instantly kill every monster in a particular level, and several other abilities. If the player uses the "all weapons and keys" cheat ("IDKFA") from Doom, a message appears warning the player against cheating and takes away all of his weapons, leaving him with only a quarterstaff. If the player uses the "god mode" cheat ("IDDQD") from Doom, the game will display a message saying "Trying to cheat, eh? Now you die!" and kills the player character.
The original shareware release of Heretic came bundled with support for online multiplayer through the new DWANGO service.
Development
Like Doom, Heretic was developed on NeXTSTEP. John Romero helped Raven employees set up the development computers, and taught them how to use id's tools and Doom engine.
Release
Shadow of the Serpent Riders
The original version of Heretic was only available through shareware registration (i.e. mail order) and contained three episodes. The retail version, Heretic: Shadow of the Serpent Riders, was distributed by GT Interactive in 1996, and featured the original three episodes and two additional episodes: The Ossuary, which takes the player to the shattered remains of a world conquered by the Serpent Riders several centuries ago, and The Stagnant Demesne, where the player enters D'Sparil's birthplace. This version was the first official release of Heretic in Europe. A free patch was also downloadable from Raven's website to update the original Heretic with the content found in Shadow of the Serpent Riders.
Along with the two full additional episodes, Shadow of the Serpent Riders contains 3 additional levels in a third additional episode (unofficially known as Fate's Path) which is inaccessible without the use of cheat codes. The first of these three levels can be accessed by typing the cheat ("ENGAGE61"). The first two levels are fully playable, but the third level does not have an exit so the player is unable to progress further.
Source release
On January 11, 1999, the source code of the game engine used in Heretic was published by Raven Software under a license that granted rights to non-commercial use, and was re-released under the GNU GPL-2.0-only on September 4, 2008. This resulted in ports to Linux, Amiga, Atari, and other operating systems, and updates to the game engine to utilize 3D acceleration. The shareware version of a console port for the Dreamcast was also released.
Reception
Heretic and Hexen shipped a combined total of roughly 1 million units by August 1997.
Heretic received mixed reviews, garnering an aggregated score of 62% on GameRankings and 78% on PC Zone.
Next Generation reviewed the PC version of the game, and stated that "if you're only going to get one action game in the next couple of months, this is the one".
While remarking that Heretic is a thinly-veiled clone of Doom, and that its being released in Europe after its sequel and with Quake due out shortly makes it somewhat outdated, Maximum nonetheless regarded it as an extremely polished and worthwhile purchase. They particularly highlighted the two additional episodes of the retail version, saying they offer a satisfying challenge even to first person shooter veterans and are largely what make the game worth buying.
In 1996, Computer Gaming World listed being turned into a chicken as #3 on its list of "the 15 best ways to die in computer gaming".
Legacy
Heretic has received three sequels: Hexen: Beyond Heretic, Hexen II, and Heretic II. Following ZeniMax Media's acquisition of id Software, the rights to the series have been disputed between both id and Raven Software; Raven's parent company Activision holds the developing rights, while id holds the publishing rights to the first three games.
The game was re-released for Windows on Steam on August 3, 2007.
Further homages to the series have been made in other id Software titles. In 2009's Wolfenstein, which Raven Software developed, Heretic's Tomes of Power are collectible power-ups found throughout the game. The character Galena from Quake Champions wears armor bearing the icon of the Serpent Riders.
In 2014, Raven co-founder Brian Raffel expressed interest in making a sequel to the Heretic series. Rather than licensing it to other developers, he wanted Raven to do it themselves.
References
External links
1994 video games
Acorn Archimedes games
Action-adventure games
Commercial video games with freely available source code
Cooperative video games
Dark fantasy video games
Doom engine games
DOS games
First-person shooters
Heretic and Hexen
Id Software games
Classic Mac OS games
Multiplayer and single-player video games
Raven Software games
Sprite-based first-person shooters
Video games about magic
Video games developed in the United States
Video games with digitized sprites
Video games with expansion packs
Games commercially released with DOSBox |
1032113 | https://en.wikipedia.org/wiki/Portable%20Application%20Description | Portable Application Description | PAD or Portable Application Description is a machine-readable document format and specification designed by the Association of Software Professionals and introduced in 1998. The PAD specification is utilized by more than 6,000 software publishers of downloadable applications covering the Windows, OS X, and Linux operating systems. PAD is a worldwide registered trademark of the Association of Software Professionals and managed by the ASP PAD Committee.
PAD allows software authors to provide standardized product descriptions and specifications to online sources in a standard way, using a simple XML schema that allows webmasters and program librarians to automate new program listings and update existing listings in their catalog. PAD saves time for both authors and webmasters, while allowing the specification to support the latest changes to operating systems and hardware.
PAD files most commonly have .XML or .PAD file name extension. PAD uses a simplified XML syntax that does not use name/value pairs in tags. All tags are attribute-free. The official PAD specification uses unique tags. To extract the fields in the official specification, it is not necessary to descend through the tag path. If multiple languages are represented in a single PAD file, then correct parsing does require descending through the tag path because leaf tags are duplicated for each language supported.
Each field in the specification has a regular expression associated with it. The regular expression acts as a constraint on the field: if it matches, the field value is legal and if it fails to match, the field and the PAD file as a whole do not conform to specification. Only files where all fields in the file pass validation are properly called PAD files.
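As an illustration of those regular-expression constraints, a field value can be checked with POSIX regular expressions in C. The field semantics and the pattern below are invented for this example and are not taken from the official specification.

#include <regex.h>
#include <stdio.h>

/* Returns 1 if 'value' satisfies 'pattern', 0 otherwise -- the same
   pass/fail rule a PAD validator applies to each field. */
static int field_is_valid(const char *pattern, const char *value)
{
    regex_t re;
    if (regcomp(&re, pattern, REG_EXTENDED | REG_NOSUB) != 0)
        return 0;
    int ok = (regexec(&re, value, 0, NULL, 0) == 0);
    regfree(&re);
    return ok;
}

int main(void)
{
    /* Hypothetical version-number field: one to three dot-separated numbers. */
    const char *pattern = "^[0-9]{1,3}(\\.[0-9]{1,3}){0,2}$";
    printf("%d\n", field_is_valid(pattern, "4.00"));     /* 1: conforms */
    printf("%d\n", field_is_valid(pattern, "4.0.0.0"));  /* 0: too many parts */
    return 0;
}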
The most current specification, version 4.00, was announced on December 1, 2012, replacing the prior version, 3.11, which had been announced on June 12, 2010. The version 4.0 specification replaces v3.1 and includes the replacement of the ASP PADGEN freeware tool with a web-based solution, AppVisor.com (still in early beta). The AppVisor platform provides a complete authoring, editing, validation, publication, hosting and submission solution for the latest version of the PAD Specification. Submission is not fully automated: each application is reviewed individually by a human before being accepted or rejected, and the review can take several weeks. If the reviewer rejects the application, the applicant is given a list of web site changes to perform in order to get the application accepted.
Update to V4.0
The current version of PAD v4.0 was developed with the input and feedback of supporting members of the Association of Software Professionals, as it has been since 1998.
As part of the major upgrade to v4.0, ASP withdrew all its free tools while also formally requesting that all PAD editing, submission and related third party software, be removed or eliminated. In place of these prior third party and freeware tools, a new web-based platform was developed for PAD Specification by AppVisor.com. AppVisor allows publishers to upgrade their v3.1 PAD files to v4.0.
However, the generated PAD file cannot be exported but only submitted directly to some directories. Submitting the XML PAD file to websites costs $150, and approving the application/submission costs $36 per year.
Some software download and directory sites (BrotherSoft.com, Softonic.com, CNET Download.com, Softpedia.com) accept the PAD v4.0 format in addition to the older formats.
Controversy
The current version of PAD is very restrictive, and only one for-profit company, Rudenko Software, has control over the PAD Repository system. Software producers cannot submit their PAD files to the repository without paying a submission service fee, and PAD certification fees add to the total cost. The manual review process slows down the entire submission, and nothing can be done to speed it up, since it is not up to the software maker. All PAD files are hosted on the PAD Repository site, and the direct link to the PAD file was removed from the new PAD format. There is no alternative available to developers since the PADGen software was abandoned and the restrictive new PAD format was introduced.
Fortunately, many websites still accept the old XML PAD format.
See also
Submission software
External links
PAD site of the Association of Software Professionals
PAD 4.0 Specification
AppVisor — The Official PAD 4.0 Online Platform
— The Official PAD Repository
— The Official PAD Wiki
References
Computer file formats
XML |
440894 | https://en.wikipedia.org/wiki/Boeing%20E-4 | Boeing E-4 | The Boeing E-4 Advanced Airborne Command Post (AACP), the current "Nightwatch" aircraft, is a strategic command and control military aircraft operated by the United States Air Force (USAF). The E-4 series are specially modified from the Boeing 747-200B for the National Emergency Airborne Command Post (NEACP) program. The E-4 serves as a survivable mobile command post for the National Command Authority, namely the President of the United States, the Secretary of Defense, and successors. The four E-4Bs are operated by the 1st Airborne Command and Control Squadron of the 595th Command and Control Group located at Offutt Air Force Base, near Omaha, Nebraska. An E-4B when in action is denoted a "National Airborne Operations Center".
Development
Two of the original 747-200 airframes were originally planned to be commercial airliners. When the airline did not complete the order, Boeing offered the airframes to the United States Air Force as part of a package leading to a replacement for the older EC-135J National Emergency Airborne Command Post (NEACP). Under the 481B NEACP program the Air Force Electronic Systems Division awarded Boeing a contract in February 1973 for two unequipped aircraft, designated E-4A, powered by four P&W JT9D engines, to which a third aircraft was added in July 1973. The first E-4A was completed at the Boeing plant outside Seattle, Washington in 1973. E-Systems won the contract to install interim equipment in these three aircraft, and the first completed E-4A was delivered in July 1973 to Andrews Air Force Base, Maryland. The next two were delivered in October 1973 and October 1974. The third E-4 differed by being powered by the GE F103 engine, which was later made standard and retrofitted to the previous two aircraft. The A-model effectively housed the same equipment as the EC-135, but offered more space and an ability to remain aloft longer than an EC-135.
In November 1973, it was reported that the program cost was estimated to total $548 million for seven 747s, six as operational command posts and one for research and development. In December 1973, a fourth aircraft was ordered; it was fitted with more advanced equipment, resulting in the designation E-4B. On 21 December 1979, Boeing delivered the first E-4B (AF Serial Number 75-0125), which was distinguished from the earlier version by the presence of a large streamlined radome on the dorsal surface directly behind the upper deck. This contains the aircraft's SHF satellite antenna.
By January 1985 all three E-4As had been retrofitted to E-4B models. The E-4B offered a vast increase in communications capability over the previous model and was considered to be 'hardened' against the effects of nuclear electromagnetic pulse (EMP) from a nuclear blast. Hardening the aircraft meant that all equipment and wiring on board was shielded from EMP.
The E-4B fleet has an estimated roll-out cost of approximately US$250 million each. In 2005 the Air Force awarded Boeing a five-year, US$2 billion contract for the continued upgrade of the E-4B fleet. In addition to the purchase and upgrade costs, the E-4 costs nearly $160,000 per hour for the Air Force to operate.
Design
The E-4B is designed to survive an EMP with systems intact and has state-of-the-art direct fire countermeasures. Although many older aircraft have been upgraded with glass cockpits, the E-4B still uses traditional analog flight instruments, as they are less susceptible to damage from an EMP blast.
The E-4B is capable of operating with a crew up to 112 people including flight and mission personnel, the largest crew of any aircraft in US Air Force history. With in-flight aerial refueling it is capable of remaining airborne for a considerable period, limited only by consumption of the engines' lubricants. In a test flight for endurance, the aircraft remained airborne and fully operational for 35.4 hours, however it was designed to remain airborne for a full week in the event of an emergency. It takes two fully loaded KC-135 tankers to fully refuel an E-4B. The E-4B has three operational decks: upper, middle, and lower.
Middle and upper decks
The flight deck contains stations for the pilot, co-pilot, and flight engineer, plus a special navigation station not normally found on commercial Boeing 747s. A lounge area and sleeping quarters for flight and maintenance crews are located aft of the flight deck. The flight crew consists of an aircraft commander, co-pilot, navigator, and flight engineer.
The middle deck contains the conference room, which provides a secure area for conferences and briefings. It contains a conference table for nine people. Aft of the conference room is a projection room serving the conference room and the briefing room. The projection room had the capability of projecting computer graphics, overhead transparencies, or 35 mm slides to the conference room and/or the briefing room but have since been modernized with flat screen displays.
The battle staff includes various controllers, planners, launch system officers, communications operators, a weather officer, administrative and support personnel, and a chief of battle staff. The Operation Looking Glass missions were commanded by a general officer with two staff officers, while the National Airborne Operations Center (NAOC) may rendezvous and embark a member of the National Command Authority (NCA) from an undisclosed location. There are at least 48 crew aboard any E-4B mission.
Behind the briefing room is the operations team area containing the automatic data processing equipment and seats and console work areas for 29 staff members. The consoles are configured to provide access to or from the automated data processing, automatic switchboard, direct access telephone and radio circuits, direct ("hot") lines, monitor panel for switchboard lines, staff, and operator inter-phone and audio recorder.
The aft compartment at the end of the main deck is the Technical Control (Tech Control) area. This area was the nerve center for all communications and communications technicians. Typically 3 of the 6 crew positions were occupied here by specialized US Air Force technicians that were responsible for the proper monitoring and distribution of all communications power, cooling, and reliability. The Technical Controller No. 1 (Tech 1, TC1) was the direct interface with the aircraft Flight Engineer and Flight Crew. This position was also the main focal point for all communications related issues. The Technical Controller No. 2 (Tech 2, TC2) was responsible for maintaining all ultra high frequency communications between the aircraft and the Nightwatch GEP (Ground Entry Points). These GEP's provided 12 voice lines to the aircraft which were used in the day-to-day operations of the mission. Secure Voice was also provided. The SHF Operator (or technician) maintained the SHF satellite link and provided other worldwide communications services probably having replaced a lot of the UHF capabilities.
The rest area, which occupies the remaining portion of the aft main deck, provides a rest and sleeping area for the crew members. The rest area contains storage for food and is also used for religious ceremonies.
Within the forward entry area is the main galley unit and stairways to the flight deck and to the forward lower equipment area. This area contains refrigerators, freezers, two convection ovens, and a microwave oven to give stewards the capability to provide more than 100 hot meals during prolonged missions. Additionally, four seats are located on the left side of the forward entry area for the security guards and the stewards.
Behind the forward entry area is the National Command Authority (NCA) area, which is designed and furnished as an executive suite. It contains an office, a lounge, a sleeping area, and a dressing room. Telephone instruments in this area provide the NCA with secure and clear worldwide communications.
The briefing room contains a briefing table with three executive seats, eighteen additional seats, a lectern, and two 80-inch flat screen LED monitors flush mounted to the partition.
The communications control area is divided into a voice area and a data area. The voice area, located on the right side of the compartment, contains the radio operator's console, the semi-automatic switchboard console, and the communication officer's console. The data area, located on the left side of the area, contains the record communications console, record data supervisor's console, high speed DATA/AUTODIN/AFSAT console, and LF/VLF control heads. The E-4B can communicate with the ground over a wide range of frequencies covering virtually the entire radio communications spectrum from 14 kHz to 8.4 GHz. Ground stations can link the E-4B into the main US ground-based communications network.
The flight avionics area contains the aircraft systems power panels, flight avionics equipment, liquid oxygen converters, and storage for baggage and spare parts.
Lower Lobe
The forward lower equipment room contains the aircraft's water supply tanks, 1200 kVA electrical power panels, step down transformers, VLF transmitter, and SHF SATCOM equipment. An AC/DC powered hydraulic retractable airstair is located in the forward right side of the forward lower equipment area, installed for airplane entry and exit. In the event of an emergency, the air stair can be jettisoned. The aft lower lobe contains the maintenance console and mission specific equipment.
The lower trailing wire antenna (TWA) area contains the aircraft's TWA reel – which is used by up to 13 communications links – the antenna operator's station, as well as the antenna reel controls and indicators. Much attention has been given to hardening this area against EMP, especially as the TWA, essential for communicating with Ohio-class ballistic missile submarines, is also particularly effective in picking up EMP.
Operational history
The E-4 fleet was originally deployed in 1974, when it was termed National Emergency Airborne Command Post (NEACP) (often pronounced "kneecap"). The aircraft was to provide a survivable platform to conduct war operations in the event of a nuclear attack. Early in the E-4's service, the media dubbed the aircraft as "the doomsday planes". The E-4 was also capable of operating the "Looking Glass" missions of the Strategic Air Command (SAC).
The aircraft were originally stationed at Andrews Air Force Base in Maryland, so that the U.S. president and secretary of defense could access them quickly in the event of an emergency. The name "Nightwatch" originates from the richly detailed Rembrandt painting, The Night Watch, that depicts local townsfolk protecting a town; it was selected by the Squadron's first commanding officer. Later, the aircraft were moved to Offutt Air Force Base where they would be safer from attack. Until 1994, one E-4B was stationed at Andrews Air Force Base at all times so the President could easily board it in times of world crisis.
The NEACP aircraft originally used the static call sign "Silver Dollar"; this call sign faded from use when daily call signs were put in use. When a President boards the E-4, its call sign becomes "Air Force One". The E-4B also serves as the Secretary of Defense's preferred means of transportation when traveling outside the U.S. The spacious interior and sophisticated communications capability provided by the aircraft allow the Secretary's senior staff to work for the duration of the mission.
With the adoption of two highly modified Boeing 747-200Bs (Air Force designation VC-25A) to serve as Air Force One in 1989 and the end of the Cold War, the need for NEACP diminished. In 1994, NEACP began to be known as NAOC, and it took on a new responsibility: ferrying Federal Emergency Management Agency crews to natural disaster sites and serving as a temporary command post on the ground until facilities could be built on site. Evidently no E-4B was employed during the Hurricane Katrina disaster of 2005, though one E-4B was used by FEMA following Hurricane Opal in 1995.
One E-4B is kept on alert at all times. The "cocked" or "on alert" E-4B is manned 24 hours a day with a watch crew on board guarding all communications systems awaiting a launch order (klaxon launch). Those crew members not on watch would be in the alert barracks, gymnasium, or at other base facilities. The 24-hour alert status at Andrews AFB ended when President Clinton ordered the aircraft to remain at Offutt unless needed, though relief crews remain based at Andrews and Wright-Patterson Air Force Base.
September 2001 to present
On 11 September 2001, an aircraft closely resembling an E-4B was spotted and filmed by news outlets and civilians orbiting the Washington, D.C., area after the attack on the Pentagon. In his book Black Ice, author Dan Verton identifies this aircraft as an E-4B taking part in an operational exercise that was canceled when the first plane struck the World Trade Center. Air traffic control recordings and radar data indicate that this E-4B, call sign VENUS77, became airborne just before 9:44 am, circled north of the White House during its climb, and then tracked to the south of Washington, D.C., where it entered a holding pattern. In 2008, Brent Scowcroft explained in a book that he was aboard this aircraft, as chairman of a DoD team called the "End to End Review", to conduct an inspection tour of an unspecified nuclear weapons site.
In January 2006, Secretary of Defense Donald Rumsfeld announced a plan to retire the entire E-4B fleet starting in 2009; in February 2007 this was reduced to retiring one of the aircraft. The next Secretary of Defense, Robert Gates, reversed the decision in May 2007, owing to the unique capabilities of the E-4B, which cannot be duplicated by any other single aircraft in Air Force service, and to the 2007 cancellation of the E-10 MC2A, which had been considered a successor to the EC-135 and E-8 aircraft and could also have performed many of the same tasks as the E-4B. As of the 2015 federal budget there were no plans to retire the E-4B. The E-4B airframe has a usable life of 115,000 hours and 30,000 cycles, which would be reached in 2039; the maintenance limiting point would occur sometime in the 2020s.
All four produced are operated by the U.S. Air Force, and are assigned to the 1st Airborne Command Control Squadron (1ACCS) of the 595th Command and Control Group at Offutt Air Force Base, Nebraska. Operations are coordinated by the United States Strategic Command.
When the President travels outside of North America using a VC-25A as Air Force One, an E-4B will deploy to a second airport in the vicinity of the President's destination, to be readily available in the event of a world crisis or an emergency that renders the VC-25A unusable. When President Barack Obama visited Honolulu, Hawaii, an E-4B was often stationed 200 miles away at Hilo International Airport on Hawaii Island.
In June 2017, two of the aircraft were damaged by falling debris when a tornado struck Offutt AFB and damaged the hangar in which they were stationed. They were out of service for eleven weeks while repairs took place. The E-4B aircraft have been based at the Air National Guard facilities at nearby Lincoln Airport three times: in 2006, in 2019 during the Missouri River flooding, and in 2021–22 during a subsequent runway replacement.
Operators
United States Air Force – Global Strike Command
595th Command and Control Group – Offutt AFB, Nebraska
1st Airborne Command and Control Squadron
Variants
E-4A Three aircraft produced (s/n 73-1676, 73-1677, and 74-0787), powered by Pratt & Whitney JT9D-7R4G2 engines. No bulge to house equipment on top of the fuselage. These were later converted to E-4Bs.
E-4B One built (s/n 75-0125), equipped with CF6-50E2 engines rated at 52,500 lbf of thrust each. Has nuclear electromagnetic pulse protection, nuclear and thermal effects shielding, advanced electronics, and a wide variety of communications equipment.
Specifications (E-4B)
Notable appearances in media
The E-4B plays a prominent role in two motion pictures. In the 1990 HBO film By Dawn's Early Light, following a nuclear strike by the Soviets, the aircraft serves as a flying platform for the presumed president, the ex–Secretary of the Interior, who is played by Darren McGavin. The aircraft is pursued by a Boeing EC-135 "Looking Glass", which successfully intercepts it. In the 2002 motion picture The Sum of All Fears, the president and his staff travel on an E-4B following the detonation of a nuclear weapon by terrorists. In the novel, the Vice President and his family are aboard the NEACP after terrorists detonate a nuclear bomb in Denver while the President and his National Security Advisor are stuck at Camp David during a blinding snowstorm. The E-4's program, Project Nightwatch, was referenced in the book The Fallout, by S. A. Bodeen.
National Geographic produced a television special on doomsday planning of the United States which includes footage from inside an E-4 during a drill.
See also
References
Bibliography
Bowers, Peter M. Boeing Aircraft since 1916. London: Putnam, 1989. .
Francillon, René J. "Doomsday 747s: The National Airborne Operations Center". Air International, December 2008. Key Publishing, Stamford, Lincs, UK. pp. 32–37.
Jenkins, Dennis R. Boeing 747-100/200/300/SP (AirlinerTech Series, Vol. 6). Specialty Press, 2000. .
Lloyd, Alwin T. A Cold War Legacy: A Tribute to Strategic Air Command, 1946–1992. Missoula, Montana, United States: Pictorial Histories Publications Company, 1999. .
Michell, Simon. Jane's Civil and Military Upgrades 1994–95. Coulsden, Surrey, UK: Jane's Information Group, 1994. .
External links
USAF E-4 fact sheet
E-4 product page and history page on Boeing.com
E-4 page on GlobalSecurity.org
E-4 page on TheAviationZone.com
E-4
1970s United States command and control aircraft
Quadjets
United States nuclear command and control
Continuity of government in the United States
Aircraft first flown in 1973
Double-deck aircraft |
47848393 | https://en.wikipedia.org/wiki/B-Scada | B-Scada | B-Scada (or Beyond–Scada) is a company based in Crystal River, Florida. B-Scada's product offerings include on-premises Supervisory Control and Data Acquisition (SCADA) and Human Machine Interface (HMI) software platforms, a cloud-based Internet of Things (IoT) software platform, and wireless sensing hardware. It is one of the first companies to use data modeling in SCADA systems to create virtual representations of real world physical assets.
Data modeling
B-Scada uses data models as the basis of its end-user toolkits, which allow assets to be represented and interacted with directly. It provides "templating", where a data model is created for a type of object instead of for a specific object. Conventional HMI and SCADA products bind data from programmable logic controllers (PLCs) or other data sources directly to the graphics. Data modeling in HMI/SCADA allows the virtualized model of assets to be bound to the HMI/SCADA screens. The PLC or OPC server memory addresses, plus any additional associated information, can then be referenced at run time, allowing one generic data model template to be used for many different specific assets.
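The templating idea can be illustrated with a small Python sketch; the names and structure below are hypothetical and do not represent B-Scada's actual API. A template models a type of asset once, and each concrete asset binds the modeled properties to its own PLC/OPC addresses at run time.

from dataclasses import dataclass

@dataclass
class AssetTemplate:
    asset_type: str
    properties: list                      # modeled property names, e.g. ["flow_rate", "motor_temp"]

def bind(template, address_map):
    # resolve each modeled property to a concrete data-source address at run time
    return {prop: address_map[prop] for prop in template.properties}

pump = AssetTemplate("pump", ["flow_rate", "motor_temp"])
pump_1 = bind(pump, {"flow_rate": "PLC1:40001", "motor_temp": "PLC1:40002"})
pump_2 = bind(pump, {"flow_rate": "PLC2:40001", "motor_temp": "PLC2:40002"})
print(pump_1, pump_2)                     # one generic template, many bound instances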
History
B-Scada was founded as Mobiform Software in 2003 by Ron DeSerranno, former Senior Software Engineer of Rockwell Software, Inc./Dynapro, Inc., where he served as the Development Lead and Architect for its industrial automation product, RSView.
Beyond SCADA
In October 2012, Mobiform Software announced it had changed its name to B-Scada (Beyond SCADA). It continued trading under the stock ticker symbol MOBS until announcing in October 2014 its new stock ticker symbol, SCDA.
Fuzz Mobile Marketing Solution
In July 2019, B-Scada launched Fuzz Mobile Marketing Solutions Inc., an online platform for sending bulk SMS messages.
References
Industrial automation software
Software companies based in Florida
Software companies of the United States |
5939149 | https://en.wikipedia.org/wiki/MySQL%20Workbench | MySQL Workbench | MySQL Workbench is a visual database design tool that integrates SQL development, administration, database design, creation and maintenance into a single integrated development environment for the MySQL database system. It is the successor to DBDesigner 4 from fabFORCE.net, and replaces the previous package of software, MySQL GUI Tools Bundle.
History
fabFORCE.net DBDesigner4
DBDesigner4 is an open source visual database design and querying tool for the MySQL database released under the GPL. It was written in 2002/2003 by the Austrian programmer Michael G. Zinner for his fabFORCE.net platform using Delphi 7 / Kylix 3.
While a physical-modeling-only tool, DBDesigner4 offers a comprehensive feature set including reverse engineering of MySQL databases, model-to-database synchronization, model poster printing, basic version control of schema models and a SQL query builder. It is available for MS Windows, Mac OS X and Linux.
In late 2003, Zinner was approached by representatives from MySQL AB and joined the company to take over the development of graphical user interface (GUI) tools for MySQL. This led to the creation of the MySQL GUI Tools Bundle.
MySQL GUI Tools Bundle
The MySQL GUI Tools Bundle is a cross-platform open source suite of desktop applications for the administration of MySQL database servers, and for building and manipulating the data within MySQL databases. It was developed by MySQL AB and later by Sun Microsystems and released under the GPL. Development on the GUI Tools bundle has stopped, and is now only preserved under the Download Archives of the MySQL site.
The GUI Tools bundle has been superseded by MySQL Workbench, and reached its End-of-Life with the beta releases of MySQL Workbench 5.2. However, the MySQL Support team continued to provide assistance for the bundle until June 30, 2010.
Releases
The first preview version of MySQL Workbench was released in September 2005, and was not included in the MySQL GUI Tools Bundle. Development was started again in 2007 and MySQL Workbench was set to become the MySQL GUI flagship product.
Version numbering was started at 5.0 to emphasise that MySQL Workbench was developed as the successor to DBDesigner4.
MySQL Workbench 5.0 and 5.1
MySQL Workbench 5.0 and 5.1 are specialized visual database design tools for the MySQL database. While MySQL Workbench 5.0 was an MS Windows-only product, cross-platform support was added to MySQL Workbench 5.1 and later.
MySQL Workbench 5.2
Starting with MySQL Workbench 5.2 the application has evolved to a general database GUI application. Apart from physical database modeling it features an SQL Editor, database migration tools, and a database server administration interface, replacing the old MySQL GUI Tools Bundle.
MySQL Workbench 6.0
On May 22, 2013, the MySQL Workbench Team announced that they were working on Version 6.0. The first public beta, labeled version 6.0.2, was released on June 14, 2013, and the first general-availability release was made on August 12, 2013.
MySQL Workbench 6.1
On January 23, 2014 the MySQL Workbench Team announced its first public beta release of Version 6.1. The first general-availability release was made on March 31, 2014. New features include improved Visual Explain output, a Performance dashboard, Performance Schema support, additional query result views, and MSAA support.
MySQL Workbench 6.2
On August 19, 2014, the MySQL Workbench Team announced its first public beta release of Version 6.2. The first general-availability release was made on September 23, 2014. New features include shortcut buttons for common operations, "pinning" of the results tab, Microsoft Access migration, MySQL Fabric integration, a Spatial View panel to visualize spatial and geometry data, a Geometry Data Viewer, configurable result-set width, properly saved SQL editor tabs, shared snippets, a new Run SQL Script dialog, and model script attachments. In addition, Client Connections management gained a "Show Details" window that displays more information about connections, locks, and attributes; performance columns can display sizes in KB, MB, or GB; the migration wizard can resume data copying if interrupted; and the MySQL connection password is remembered across the MySQL Workbench session.
MySQL Workbench 6.3
On March 5, 2015, the MySQL Workbench Team announced its first public beta release of Version 6.3. The first general-availability release was made on April 23, 2015. New features include a "fast migration" option to migrate the data from the command line instead of the GUI, an SSL certificate generator, improved SQL auto-completion, a new table data import and export wizard, and MySQL Enterprise Firewall support. As of version 6.3.8, MySQL Workbench for macOS has incompatibilities with macOS Sierra. Version 6.3.9 is compatible with macOS Sierra; however, it does not work on macOS High Sierra, whose users need to run version 6.3.10.
MySQL Workbench 8.0
On April 5, 2018, the MySQL Workbench Team announced the first public release of version 8.0.11 as a Release Candidate (RC) together with MySQL Community Server 8.0.11. The first General Availability (GA) release appeared on July 27, 2018 again together with the server following the new policy for aligning version numbers across most of MySQL products. MySQL Workbench now uses ANTLR4 as backend parser and has a new auto-completion engine that works with object editors (triggers, views, stored procedures, and functions) in the visual SQL editor and in models. The new versions add support for new language features in MySQL 8.0, such as common-table expressions and roles. There's also support for invisible indexes and persisting of global system variables. The new default authentication plugin caching_sha2_password in MySQL 8.0 is now supported by Workbench, so resetting user accounts to other authentication types is no longer necessary when connecting to the latest servers. Administrative tabs are updated with the latest configuration options and the user interface was made more consistent between the tabs.
Features
Prominent features of MySQL Workbench are:
General
Database Connection & Instance Management
Wizard driven action items
Fully scriptable with Python and Lua
Support for custom plugins
MSAA (Windows Accessibility API) compliant
Supports MySQL Enterprise features (Audit Log, Firewall, and Enterprise Backup)
SQL Editor
Schema object browsing, inspection, and search
SQL syntax highlighter and statement parser
SQL code completion and context sensitive help
Multiple and editable result sets
Visual EXPLAIN
SQL snippets collections
SSH connection tunneling
Unicode support
Data modeling
ER diagramming
Drag'n'Drop visual modeling
Reverse engineering from SQL Scripts and live database
Forward engineering to SQL Scripts and live database
Schema synchronization
Printing of models
Import from fabFORCE.net DBDesigner4
Database administration
Start and stop of database instances
Instance configuration
Database account management
Instance variables browsing
Log file browsing
Data dump export/import
Performance monitoring
Performance Schema metrics
MySQL instance dashboard
Query statistics
Database migration
Any ODBC compliant database
Native support: Microsoft SQL Server, PostgreSQL, SQL Anywhere, SQLite, and Sybase ASE
Licensing and editions
MySQL Workbench is the first product in the MySQL family to be offered in two different editions: an open-source edition and a proprietary edition. The "Community Edition" is a fully featured product whose functionality is not restricted. As the foundation for all other editions, it benefits from all future development efforts. The proprietary "Standard Edition" extends the Community Edition with a series of modules and plugins.
As this business decision was announced soon after the takeover of MySQL by Sun Microsystems, this has caused speculation in the press about the future licensing of the MySQL database.
Community reception and reviews
Since its introduction MySQL Workbench has become popular within the MySQL community. It is now the second most downloaded product from the MySQL website with more than 250,000 downloads a month. Before that it was voted Database Tool of the Year 2009 on Developer.com.
MySQL Workbench has been reviewed by the open source community and print magazines.
See also
Comparison of database tools
References
External links
MySQL Workbench Community blog
MySQL
Database administration tools
Data modeling tools
Lua (programming language)-scriptable software
Software that uses Scintilla |
243410 | https://en.wikipedia.org/wiki/Uname | Uname | uname (short for unix name) is a computer program in Unix and Unix-like computer operating systems that prints the name, version and other details about the current machine and the operating system running on it.
History
The uname system call and command appeared for the first time in PWB/UNIX. Both are specified by POSIX. The GNU version of uname is included in the "sh-utils" or "coreutils" packages. uname itself is not available as a standalone program. The version of uname bundled in GNU coreutils was written by David MacKenzie. The command is available as a separate package for Microsoft Windows as part of the GnuWin32 project and the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.
Related and similar commands
Some Unix variants, such as AT&T UNIX System V Release 3.0, include the related setname program, used to change the values that uname reports.
The ver command found in operating systems such as DOS, OS/2 and Microsoft Windows is similar to the uname command.
Examples
On a system running Darwin, the output from running uname with the -a command line argument might look like the text below:
$ uname -a
Darwin Roadrunner.local 10.3.0 Darwin Kernel Version 10.3.0: Fri Feb 26 11:58:09 PST 2010; root:xnu-1504.3.12~1/RELEASE_I386 i386
The following table contains examples from various versions of uname on various platforms. Within the bash shell, the variable OSTYPE contains a value similar (but not identical) to the output of uname.
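Similar information can also be obtained programmatically; for example, Python's standard library exposes it through platform.uname(), as in this minimal sketch (output varies by machine):

import platform

info = platform.uname()                   # fields: system, node, release, version, machine, processor
print(info.system, info.release, info.machine)
# e.g. Linux 5.15.0-86-generic x86_64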
See also
List of Unix commands
lsb_release
ver (command)
Footnotes
External links
Unix SUS2008 utilities |
863095 | https://en.wikipedia.org/wiki/Internet%20security | Internet security | Internet security is a branch of computer security. It encompasses the Internet, browser security, web site security, and network security as it applies to other applications or operating systems as a whole. Its objective is to establish rules and measures to use against attacks over the Internet. The Internet is an inherently insecure channel for information exchange, with high risk of intrusion or fraud, such as phishing, online viruses, trojans, ransomware and worms.
Many methods are used to combat these threats, including encryption and ground-up engineering.
Threats
Malicious software
Malicious software comes in many forms, such as viruses, Trojan horses, spyware, and worms.
Malware, a portmanteau of malicious software, is any software used to disrupt computer operation, gather sensitive information, or gain access to private computer systems. Malware is defined by its malicious intent, acting against the requirements of the computer user, and does not include software that unintentionally causes harm due to some deficiency. The term badware applies to both malware and unintentionally harmful software.
A botnet is a network of computers that have been taken over by a robot or bot that performs large-scale malicious acts for its creator.
Computer viruses are programs that can replicate their structures or effects by infecting other files or structures on a computer. The typical purpose of a virus is to take over a computer to steal data.
Computer worms are programs that can replicate themselves throughout a computer network.
Ransomware is a type of malware that restricts access to the computer system that it infects, and demands a ransom in order for the restriction to be removed.
Scareware is a program of usually limited or no benefit, containing malicious payloads, that is sold via unethical marketing practices. The selling approach uses social engineering to cause shock, anxiety, or the perception of a threat, generally directed at an unsuspecting user.
Spyware refers to programs that surreptitiously monitor activity on a computer system and report that information to others without the user's consent.
One particular kind of spyware is keylogging malware. Often referred to as keylogging or keyboard capturing, it is the action of recording (logging) the keys struck on a keyboard.
A Trojan horse, commonly known as a Trojan, is a general term for malware that pretends to be harmless, so that a user will be convinced to download it onto the computer.
Denial-of-service attacks
A denial-of-service attack (DoS) or distributed denial-of-service attack (DDoS) is an attempt to make a computer resource unavailable to its intended users. It works by making so many service requests at once that the system is overwhelmed and becomes unable to process any of them. DoS may target cloud computing systems. According to business participants in an international security survey, 25% of respondents experienced a DoS attack in 2007 and another 16.8% in 2010. DoS attacks often use bots (or a botnet) to carry out the attack.
Phishing
Phishing targets online users in an attempt to extract sensitive information such as passwords and financial information. Phishing occurs when the attacker pretends to be a trustworthy entity, either via email or a web page. Victims are directed to web pages that appear to be legitimate, but instead route information to the attackers. Tactics such as email spoofing attempt to make emails appear to be from legitimate senders, or long complex URLs hide the actual website. Insurance group RSA claimed that phishing accounted for worldwide losses of $10.8 billion in 2016.
Application vulnerabilities
Applications used to access Internet resources may contain security vulnerabilities such as memory safety bugs or flawed authentication checks. Such bugs can give network attackers full control over the computer.
A widespread web-browser application vulnerability is the cross-origin resource sharing (CORS) vulnerability; for maximum security and privacy, adequate countermeasures against it (such as the patches provided for WebKit-based browsers) should be adopted.
Countermeasures
Network layer security
TCP/IP protocols may be secured with cryptographic methods and security protocols. These protocols include Secure Sockets Layer (SSL), succeeded by Transport Layer Security (TLS) for web traffic, Pretty Good Privacy (PGP) for email, and IPsec for the network layer security.
Internet Protocol Security (IPsec)
IPsec is designed to protect TCP/IP communication in a secure manner. It is a set of security extensions developed by the Internet Engineering Task Force (IETF). It provides security and authentication at the IP layer by transforming data using encryption. Two main types of transformation form the basis of IPsec: the Authentication Header (AH) and the Encapsulating Security Payload (ESP). They provide data integrity, data origin authentication, and anti-replay services. These protocols can be used alone or in combination.
Basic components include:
Security protocols for AH and ESP
Security association for policy management and traffic processing
Manual and automatic key management for the Internet key exchange (IKE)
Algorithms for authentication and encryption
The algorithm allows these sets to work independently without affecting other parts of the implementation. The IPsec implementation is operated in a host or security gateway environment giving protection to IP traffic.
Threat modeling
Threat modeling tools help to proactively analyze the cyber security posture of a system or system of systems and in that way prevent security threats.
Multi-factor authentication
Multi-factor authentication (MFA) is an access control method in which a user is granted access only after successfully presenting separate pieces of evidence to an authentication mechanism – two or more from the following categories: knowledge (something they know), possession (something they have), and inherence (something they are). Internet resources, such as websites and email, may be secured using this technique.
Security token
Some online sites offer customers the ability to use a six-digit code which changes every 30–60 seconds on a physical security token. The token has built-in computations and manipulates numbers based on the current time. This means that every thirty seconds only a certain set of numbers will validate access. The website is made aware of that device's serial number and knows the computation and correct time to verify the number. After 30–60 seconds the device presents a new six-digit number that can be used to log into the website.
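The following is a minimal sketch of how such a time-based code can be derived, in the style of the HOTP/TOTP algorithms (RFC 4226/6238); commercial tokens may use different algorithms, secrets and parameters.

import hashlib, hmac, struct, time

def time_based_code(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    counter = int(time.time() // interval)                    # changes every `interval` seconds
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

print(time_based_code(b"shared-secret"))                      # token and server derive the same code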
Electronic mail security
Background
Email messages are composed, delivered, and stored in a multiple step process, which starts with the message's composition. When a message is sent, it is transformed into a standard format according to RFC 2822. Using a network connection, the mail client sends the sender's identity, the recipient list and the message content to the server. Once the server receives this information, it forwards the message to the recipients.
Pretty Good Privacy (PGP)
Pretty Good Privacy provides confidentiality by encrypting messages to be transmitted or data files to be stored using an encryption algorithm such as Triple DES or CAST-128. Email messages can be protected by using cryptography in various ways, such as the following:
Digitally signing the message to ensure its integrity and confirm the sender's identity.
Encrypting the message body of an email message to ensure its confidentiality.
Encrypting the communications between mail servers to protect the confidentiality of both message body and message header.
The first two methods, message signing and message body encryption, are often used together; however, encrypting the transmissions between mail servers is typically used only when two organizations want to protect emails regularly sent between them. For example, the organizations could establish a virtual private network (VPN) to encrypt communications between their mail servers. Unlike methods that only encrypt a message body, a VPN can encrypt all communication over the connection, including email header information such as senders, recipients, and subjects. However, a VPN does not provide a message signing mechanism, nor can it provide protection for email messages along the entire route from sender to recipient.
Message Authentication Code
A message authentication code (MAC) is a cryptographic method that uses a secret key to digitally sign a message. This method outputs a MAC value that the receiver can verify by recomputing it with the same secret key used by the sender and comparing the result. The message authentication code protects both a message's data integrity and its authenticity.
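As an illustration, HMAC (a widely used MAC construction) is available in Python's standard library; the key and message below are placeholders.

import hashlib, hmac

key = b"shared-secret-key"
message = b"example message"

tag = hmac.new(key, message, hashlib.sha256).digest()         # sender computes the MAC
# receiver recomputes the MAC with the same key and compares in constant time
valid = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
print(valid)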
Firewalls
A computer firewall controls access to a single computer. A network firewall controls access to an entire network. A firewall is a security device — computer hardware or software — that filters traffic and blocks outsiders. It generally consists of gateways and filters. Firewalls can also screen network traffic and block traffic deemed unauthorized.
Web security
Firewalls restrict incoming and outgoing network packets. Only authorized traffic is allowed to pass through it. Firewalls create checkpoints between networks and computers. Firewalls can block traffic based on IP source and TCP port number. They can also serve as the platform for IPsec. Using tunnel mode, firewalls can implement VPNs. Firewalls can also limit network exposure by hiding the internal network from the public Internet.
Types of firewall
Packet filter
A packet filter processes network traffic on a packet-by-packet basis. Its main job is to filter traffic from a remote IP host, so a router is needed to connect the internal network to the Internet. The router is known as a screening router, which screens packets leaving and entering the network.
Stateful packet inspection
In a stateful firewall the circuit-level gateway is a proxy server that operates at the network level of the Open Systems Interconnection (OSI) model and statically defines what traffic will be allowed. Circuit proxies forward network packets (formatted data) containing a given port number, if the port is permitted by the algorithm. The main advantage of a proxy server is its ability to provide Network Address Translation (NAT), which can hide the user's IP address from the Internet, effectively protecting internal information from the outside.
Application-level gateway
An application-level firewall is a third generation firewall where a proxy server operates at the very top of the OSI model, the IP suite application level. A network packet is forwarded only if a connection is established using a known protocol. Application-level gateways are notable for analyzing entire messages rather than individual packets.
Browser choice
A web browser's market share tends to predict the share of hacker attacks directed against it. For example, Internet Explorer 6, which used to lead the market, was heavily attacked.
Protections
Antivirus
Antivirus software can protect a programmable device by detecting and eliminating malware. A variety of techniques are used, such as signature-based, heuristics, rootkit, and real-time.
Password managers
A password manager is a software application that creates, stores and provides passwords to applications. Password managers encrypt passwords. The user only needs to remember a single master password to access the store.
Security suites
Security suites were first offered for sale in 2003 (McAfee) and contain firewalls, anti-virus, anti-spyware and other components. They may also offer theft protection, portable storage device safety checks, private Internet browsing, cloud anti-spam, a file shredder, or the ability to make security-related decisions (such as answering popup windows), and several are available free of charge.
History
At the National Association of Mutual Savings Banks (NAMSB) conference in January 1976, Atalla Corporation (founded by Mohamed Atalla) and Bunker Ramo Corporation (founded by George Bunker and Simon Ramo) introduced the earliest products designed for dealing with online security. Atalla later added its Identikey hardware security module, and supported processing online transactions and network security. Designed to process bank transactions online, the Identikey system was extended to shared-facility operations. It was compatible with various switching networks, and was capable of resetting itself electronically to any one of 64,000 irreversible nonlinear algorithms as directed by card data information. In 1979, Atalla introduced the first network security processor (NSP).
See also
Comparison of antivirus software
Comparison of firewalls
Cyberspace Electronic Security Act (in the US)
Cybersecurity information technology list
Firewalls and Internet Security (book)
Goatse Security
Internet Crime Complaint Center
Identity Driven Networking
Internet safety
Network security policy
Usability of web authentication systems
Web literacy (Security)
References
External links
National Institute of Standards and Technology (NIST.gov) - Information Technology portal with links to computer- and cyber security
National Institute of Standards and Technology (NIST.gov) -Computer Security Resource Center -Guidelines on Electronic Mail Security, version 2
PwdHash Stanford University - Firefox & IE browser extensions that transparently convert a user's password into a domain-specific password.
Cybertelecom.org Security - surveying federal Internet security work
DSL Reports.com- Broadband Reports, FAQs and forums on Internet security, est 1999 |
1392108 | https://en.wikipedia.org/wiki/Slapt-get | Slapt-get | slapt-get is an APT-like package management system for Slackware. Slapt-get tries to emulate the features of Debian's (apt-get) as closely as possible.
Released under the terms of the GNU General Public License, slapt-get is free software.
Features
slapt-get builds functionality on top of the native Slackware package tools (installpkg, upgradepkg and removepkg) enabling package query, remote fetching, system updates, integrated changelog information, and many optional advanced features such as dependency resolution, package conflicts, suggestions, checksum and public key verification, and transfer resumption.
slapt-get uses the cURL library (libcurl) for transport. libcurl provides support for ftp, ftps, http, https, file:// and other resource types, along with transfer resume for incomplete downloads. slapt-get also uses the GNU Privacy Guard library to validate signatures.
slapt-get provides a simple configuration file format that includes an exclusion mechanism for use with the system upgrade option as well as declarations for all desired package sources. Each package source can optionally be tagged with a specific priority in order to override the package version comparison and honor upstream software downgrades as might be the case when Slackware reverts to a previous version of a package.
Dependencies
slapt-get does not provide dependency resolution for packages included within the Slackware distribution. It does, however, provide a framework for dependency resolution in Slackware compatible packages similar in fashion to the hand-tuned method APT utilizes. Several package sources and Slackware based distributions take advantage of this functionality. Hard, soft, and conditional dependencies along with package conflicts and complementary package suggestions can be expressed using the slapt-get framework.
Adding dependency information requires no modification to the packages themselves. Rather, the package listing file, PACKAGES.TXT, is used to specify these relationships. This file is provided by Patrick Volkerding and is similar to the Packages.gz file in use by Debian. Several scripts are available to generate the PACKAGES.TXT file from a group of packages. The file format used by Patrick Volkerding is extended by adding a few extra lines per package. slapt-get then parses this file during source downloads. Typically, third party packages store the dependency information within the package itself for later extraction into the PACKAGES.TXT. The inclusion of this information within the Slackware package format does not inhibit the ability for Slackware pkgtools to install these packages. This information is silently ignored and discarded after the package is installed.
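An illustrative PACKAGES.TXT entry with such extended fields might look as follows; the package, versions and field values here are hypothetical, and the exact field names should be checked against the slapt-get documentation.

PACKAGE NAME:  example-1.0-i486-1.tgz
PACKAGE LOCATION:  ./extra
PACKAGE SIZE (compressed):  120 K
PACKAGE SIZE (uncompressed):  480 K
PACKAGE REQUIRED:  libfoo >= 1.2,libbar
PACKAGE CONFLICTS:  example-legacy
PACKAGE SUGGESTS:  example-docs
PACKAGE DESCRIPTION:
example: example (an illustrative package entry)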
Package sources
slapt-get works with official Slackware mirrors and third party package repositories such as http://www.slacky.eu/. slapt-get looks for support files, PACKAGES.TXT and CHECKSUMS.md5, in the repository for package information. These files provide package names, versions, sizes (both compressed and uncompressed), checksums, as well as a package description. These files can be extended, as discussed in the previous section, to add dependency listings, conflict information, and package suggestions. These files can also proxy for other remote sources by specifying a MIRROR declaration for each package.
GSlapt
GSlapt is a GTK+ frontend to libslapt, the slapt-get library which provides advanced package management for Slackware and its derivatives. Inspired by the functionality present in Synaptic, Gslapt aims to bring the ease of use enjoyed by Debian and its derivatives to the Slackware world.
GSlapt was written primarily to supersede the vlapt (x)dialog slapt-get frontend used by VectorLinux.
Distributions
Besides Slackware, slapt-get and GSlapt are included by several other distributions, including:
Absolute Linux
Salix OS
Slamd64
VectorLinux
Wolvix
References
External links
Slapt-get on SlackWiki
Free package management systems
Linux package management-related software
Linux-only free software
Slackware |
42709337 | https://en.wikipedia.org/wiki/Gordon%E2%80%93Loeb%20model | Gordon–Loeb model | The Gordon–Loeb model is a mathematical economic model analyzing the optimal investment level in information security.
Investing to protect company data involves a cost that, unlike other investments, usually does not generate profit; it does, however, serve to prevent additional costs. Thus, it is important to compare the cost of protecting a specific set of data with the potential loss should that data be stolen, lost, damaged or corrupted. To apply the model, a company must know three parameters:
how much the data is worth;
how much the data is at risk;
the probability an attack on the data is going to be successful, or vulnerability.
These three parameters are multiplied together to give the expected monetary loss in the absence of any security investment.
From the model it follows that the amount of money a company spends on protecting information should, in most cases, be only a small fraction of the expected loss (i.e., the expected value of the loss following a security breach). Specifically, the model shows that it is generally not cost-effective to invest in information security (including cybersecurity and computer-security-related activities) amounts greater than 37% of the expected loss. The Gordon–Loeb model also shows that, for a given level of potential loss, the optimal amount to invest in protecting an information set does not always increase with that set's vulnerability. Thus, companies may obtain greater economic returns by investing in cyber/information security activities aimed at data sets with a medium level of vulnerability. In other words, investment in safeguarding a company's data reduces vulnerability with diminishing incremental returns.
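A minimal numerical sketch of this result can be written using one of the security-breach probability functions considered in Gordon and Loeb's paper, S(z, v) = v / (a·z + 1)^b; the parameter values below (a, b, the vulnerability v and the potential loss L) are illustrative assumptions, not values prescribed by the model.

# Gordon-Loeb sketch: find the investment z maximizing the expected net benefit
def expected_net_benefit(z, v, L, a=1e-5, b=1.0):
    s = v / (a * z + 1) ** b      # remaining vulnerability after investing z
    return (v - s) * L - z        # reduction in expected loss minus cost of investment

v, L = 0.6, 1_000_000             # hypothetical vulnerability and potential loss
z_star = max(range(0, 1_000_001, 1_000), key=lambda z: expected_net_benefit(z, v, L))
print(z_star, 0.37 * v * L)       # optimal spend (~145,000) stays below ~222,000

Here the optimal investment is roughly 24% of the expected loss of 600,000, consistent with the model's 1/e ≈ 37% upper bound.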
The Gordon–Loeb Model was first published by Lawrence A. Gordon and Martin P. Loeb in their 2002 paper in ACM Transactions on Information and System Security, entitled "The Economics of Information Security Investment". The paper was reprinted in the 2004 book Economics of Information Security. Gordon and Loeb are both professors at the University of Maryland's Robert H. Smith School of Business.
The Gordon–Loeb Model is one of the most well accepted analytical models for the economics of cyber security. The model has been widely referenced in the academic and practitioner literature. The model has also been empirically tested in several different settings. Research by mathematicians Marc Lelarge and Yuliy Baryshnikov generalized the results of the Gordon–Loeb Model.
The Gordon–Loeb model has been featured in the popular press, such as The Wall Street Journal and The Financial Times.
References
Data security
Mathematical economics |
7674562 | https://en.wikipedia.org/wiki/Oracle%20Linux | Oracle Linux | Oracle Linux (abbreviated OL, formerly known as Oracle Enterprise Linux or OEL) is a Linux distribution packaged and freely distributed by Oracle, available partially under the GNU General Public License since late 2006. It is compiled from Red Hat Enterprise Linux (RHEL) source code, replacing Red Hat branding with Oracle's. It is also used by Oracle Cloud and Oracle Engineered Systems such as Oracle Exadata and others.
Potential users can freely download Oracle Linux through Oracle's E-delivery service (Oracle Software Delivery Cloud) or from a variety of mirror sites, and can deploy and distribute it without cost. The company's Oracle Linux Support program aims to provide commercial technical support, covering Oracle Linux and existing RHEL or CentOS installations but without any certification from the former (i.e. without re-installation or re-boot). Oracle Linux had over 15,000 customers subscribed to the support program.
RHEL compatibility
Oracle Corporation distributes Oracle Linux with two alternative Linux kernels:
Red Hat Compatible Kernel (RHCK) identical to the kernel shipped in RHEL
Unbreakable Enterprise Kernel (UEK) based on newer mainline Linux kernel versions, with Oracle's own enhancements for OLTP, InfiniBand, SSD disk access, NUMA optimizations, Reliable Datagram Sockets (RDS), async I/O, OCFS2, and networking.
Oracle promotes the Unbreakable Enterprise Kernel as having 100% compatibility with RHEL, even though this is essentially impossible to guarantee because the kernel ABI changes for various reasons, including the kernel being based on a newer version that has many thousands of differences from Red Hat's kernel. While the upstream Linux kernel developers try never to break userspace, it has happened before. Oracle's compatibility claims lead the user to conclude that third-party RHEL-certified applications will behave properly on the Oracle kernel, but it does not provide any reference to third-party documentation.
Hardware compatibility
Oracle Linux is certified on servers including from IBM, Hewlett-Packard, Dell, Lenovo, and Cisco. In 2010, Force10 announced support for Oracle VM Server for x86 and Oracle Linux. Oracle Linux is also available on Amazon EC2 as an Amazon Machine Image, and on Microsoft Windows Azure as a VM Image.
Oracle/Sun servers with x86-64 processors can be configured to ship with Oracle Linux.
In November 2017, Oracle announced Oracle Linux on the ARM platform with support for the Raspberry Pi 3, Cavium ThunderX and X-Gene 3.
Virtualization support
Under the Oracle Linux Support program, Oracle Linux supports KVM and Xen.
Other Oracle products are only supported under the Xen-based Oracle VM Server for x86.
Deployment inside Oracle Corporation
Oracle Corporation uses Oracle Linux extensively within Oracle Public Cloud and internally to lower IT costs. Oracle Linux is deployed on more than 42,000 servers by Oracle Global IT; the SaaS Oracle On Demand service, Oracle University, and Oracle's technology demo systems also run Oracle Linux.
Software developers at Oracle develop Oracle Database, Fusion Middleware, E-Business Suite and other components of Oracle Applications on Oracle Linux.
Related products
Oracle Linux is used as the underlying operating system for the following appliances.
Oracle Exadata
Oracle Private Cloud Appliance
Oracle Big Data Appliance
Oracle Exalytics
Oracle Database Appliance
Specific additions
Ksplice – Oracle acquired Ksplice, Inc. in 2011, and offers Oracle Linux users Ksplice to enable hot kernel patching
DTrace – Oracle has begun porting DTrace from Solaris as a Linux kernel module
Oracle Clusterware – OS-level high availability technology used by Oracle RAC
Oracle Enterprise Manager – freely available to users with Oracle Linux support subscriptions to manage, monitor, and provision Oracle Linux.
BTRFS
Benchmark submissions
Sun Fire systems
In March 2012, Oracle submitted a TPC-C benchmark result using an x86 Sun Fire server running Oracle Linux and the Unbreakable Enterprise Kernel. With 8 Intel Xeon processors running Oracle DB 11 R2, the system was benchmarked at handling over 5.06 million tpmC (New-Order transactions per minute while fulfilling TPC-C). The server was rated at the time as the third-fastest TPC-C non-clustered system and the fastest x86-64 non-clustered system.
Oracle also submitted a SPECjEnterprise2010 benchmark record using Oracle Linux and Oracle WebLogic Server, and achieved both a single node and an x86 world record result of 27,150 EjOPS (SPECjEnterprise Operation/second).
Cisco UCS systems
Cisco submitted 2 TPC-C benchmark results that run Oracle Linux with the Unbreakable Enterprise Kernel R2 on UCS systems. The UCS systems rank fourth and eighth on the top TPC-C non-clustered list.
SPARC version
In December 2010, Oracle CEO Larry Ellison, in response to a question on Oracle's Linux strategy, said that at some point in the future Oracle Linux would run on Oracle's SPARC platforms. At Oracle OpenWorld 2014 John Fowler, Oracle's Executive Vice President for Systems, also said that Linux will be able to run on SPARC at some point.
In October 2015, Oracle released a Linux reference platform for SPARC systems based on Red Hat Enterprise Linux 6.
In September 2016, Oracle released information about an upcoming product, Oracle Exadata SL6-2, a database server using SPARC processors running Linux.
On 31 March 2017, Oracle posted the first public release of Oracle Linux for SPARC, installable on SPARC T4, T5, M5, and M7 processors. The release notes state that the release is being made available "for the benefit of developers and partners", but is only supported on Exadata SL6 hardware.
Software updates and version history
In March 2012, Oracle announced free software updates and errata for Oracle Linux on Oracle's public yum repositories. In September 2013, Oracle announced that each month its free public yum servers handle 80 TB of data, and the switch to the Akamai content delivery network to handle the traffic growth.
Release history
Oracle Linux 8, 8.1, 8.2, 8.3, 8.4
Oracle Linux 7, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9
Oracle Linux 6, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10
Oracle Linux 5, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 5.10, 5.11
Oracle Enterprise Linux 4.4, 4.5, 4.6, 4.7, 4.8, 4.9
Oracle Linux uses a version-naming convention identical to that of Red Hat Enterprise Linux (e.g. the first version, Oracle Linux 4.5, is based on RHEL 4.5).
Oracle OpenStack for Oracle Linux
On 24 September 2014, Oracle announced the Oracle OpenStack for Oracle Linux distribution, which allows users to control Oracle Linux and Oracle VM through OpenStack in production environments. Based on the OpenStack Icehouse release, the Oracle OpenStack for Oracle Linux distribution is a cloud management software product that provides an enterprise-grade solution to deploy and manage the IT environment. The product maintains the flexibility of OpenStack, allowing users to deploy different configurations and to integrate with different software and hardware vendors. Oracle OpenStack for Oracle Linux can be downloaded for free from the Oracle web page, with no licensing cost. Supported OpenStack services in version 1 include Nova, Keystone, Cinder, Glance, Neutron, Horizon and Swift. According to Oracle, support for Oracle OpenStack for Oracle Linux is included as part of Oracle Premier Support for Oracle Linux, Oracle VM, and Systems.
See also
Oracle Solaris
Red Hat Enterprise Linux derivatives
List of commercial products based on Red Hat Enterprise Linux
References
External links
Enterprise Linux distributions
Oracle software
RPM-based Linux distributions
X86-64 Linux distributions
Linux distributions |
18988345 | https://en.wikipedia.org/wiki/NEi%20Software | NEi Software | NEi Software, founded as Noran Engineering, Inc. in 1991, is an engineering software company that develops, publishes and promotes FEA (finite element analysis) software programs including its flagship product NEi Nastran. The FEA algorithms allow engineers to analyze how a structure will behave under a variety of conditions. The types of analysis include linear and nonlinear stress, dynamic, and heat transfer analysis. MCT, PPFA (progressive ply failure analysis), dynamic design analysis method, optimization, fatigue, CFD and event simulation are just some of the specialized types of analysis supported by the company.
NEi Software is used by engineers primarily in the aerospace, automobile, maritime, and offshore industries. The software is intended to save costs by reducing time to market; testing for function and safety; reducing the need for physical prototypes; and minimizing materials, weight and size of structures. After designers create an FEA model, analysts check for potential points of stress and buckling. Customers include racing yacht builder Farr Yacht Design, and SpaceShipTwo builder Scaled Composites. Other projects using NEi Software include the Swift KillerBee unmanned air vehicle, Southern Astrophysical Research Telescope (SOAR), James Webb Space Telescope, Red Bull Racing's Minardi Formula One car, and the NuLens Ltd. Accommodative Intraocular lens eye implant.
References
External links
NEi Software
Femap by Siemens.com
NEi Nastran in Turkey
Computer-aided engineering software
Privately held companies based in California
Companies based in Westminster, California |
30491401 | https://en.wikipedia.org/wiki/1942%20USC%20Trojans%20football%20team | 1942 USC Trojans football team | The 1942 USC Trojans football team represented the University of Southern California (USC) in the 1942 college football season. In their first year under head coach Jeff Cravath, the Trojans compiled a 5–5–1 record (4–2–1 against conference opponents), finished in fourth place in the Pacific Coast Conference, and were outscored by their opponents by a combined total of 184 to 128.
Schedule
References
USC
USC Trojans football seasons
USC Trojans football |
40683421 | https://en.wikipedia.org/wiki/Darren%20Kimura | Darren Kimura | Darren T. Kimura (born September 10, 1974, Hilo, Hawaii) is an American businessman, inventor, and investor. He is best known as the inventor of Micro Concentrated solar power (CSP) technology otherwise known as MicroCSP.
Life
Kimura was born to Japanese American parents in Hilo, Hawaii and graduated from Waiakea High School. He achieved the rank of Eagle Scout as a member of the Boy Scouts of America. He studied Computer Science and Business Management at the University of Hawaiʻi at Mānoa. He later attended Portland State University, studying Electrical Engineering, and Stanford University, studying Computer Science. During the 2009 flu pandemic, Newsweek covered Kimura's challenges in finding Tamiflu for his wife Kelly due to hoarding.
Career
Kimura began his career as an entrepreneur by bringing the sport of paintball to Hawaii while still in high school. While attending the University of Hawaii at Manoa he worked in the Information and Computer Sciences department, where he launched Nalu Communications, an internet service provider. Later he created and expanded Energy Industries Corporation. With the support of Energy Industries he created Energy Laboratories, an incubator for start-up companies, and Enerdigm Ventures, a venture capital firm to seed and support early- to growth-stage companies. MicroCSP technology and Sopogy, Inc. began as a spin-off at Energy Laboratories and were initially funded by Kimura and Enerdigm Ventures. Kimura and Enerdigm Ventures invested in the creation of LiveAction as a spin-out from Referentia, a government contractor, and in the creation of Spin Technology, a data backup product for Google Workspace and Microsoft 365.
He also supported the construction of the world's first MicroCSP project, called Holaniku, at Keahole Point. He served as a director of the State of Hawaii venture capital fund. His entrepreneurial accomplishments have led him to be featured on the covers of MidWeek and Pacific Edge, in Entrepreneur, and in the 2007 book The Greater Good: Life Lessons from Hawaii's Leaders. He was also a live guest with Al Gore on The Climate Reality Project's 24 Hours of Reality – The Dirty Weather Report. He was featured in the Hawaiian Electric Company's clean energy promotion, which was highlighted during APEC United States 2011, and is an author at Cisco blogs.
Energy Industries Corporation
Kimura started Energy Conservation Hawaii in 1994, from the back of his SUV, using his surfboard as his desk. In a few years the company reached $50 million in revenues and began national expansion. Kimura changed the company name to Energy Industries Corporation to appeal to its national markets, and he remains the largest shareholder. The underlying concept for creating Energy Industries Corporation was to help make energy efficiency simple. In his work at Energy Industries Corporation, Kimura provided Energy Star consulting services in such locations as Hawaii, Palau, Guam and Saipan. In 2008, Energy Industries Corporation was featured in The Wall Street Journal article "Alternative State", about renewable energy projects created in Hawaii.
MicroCSP
The concept for MicroCSP technologies was created when Kimura attempted to install a conventional concentrating solar power trough in Kona, Hawaii. Realizing that it was uneconomical and impractical to ship, install and operate such large components in remote locations like Hawaii, Kimura worked on reducing the dimensions of the solar collector, which led to reconfiguring the technology and incorporating state-of-the-art materials. MicroCSP is used for community-sized power plants (1 MW to 50 MW), for industrial, agricultural and manufacturing 'process heat' applications, and when large amounts of hot water are needed, such as for resort swimming pools, water parks, large laundry facilities, sterilization, distillation and other such uses. MIT also studied the use of MicroCSP technology in power generation using the Organic Rankine Cycle. Kimura trademarked the term MicroCSP and later released the term for use in the public domain to help accelerate MicroCSP adoption. His US patent on the idea served as the basis for other MicroCSP inventions. Companies producing MicroCSP technologies include Rackam, Aora, Sun2Power, Chromasun, SolarLite, NEP Solar, Novatec Solar, Industrial Solar, Focal Point Energy, SunTrough, Focused Sun, Heat 2 Power, Nanogen, and Tamuz Energy.
Sopogy
Sopogy (short for "Solar Power Technology") was a solar thermal technology supplier founded in 2002 at the Honolulu, Hawaii–based clean technology incubator known as Energy Laboratories. The company began its research on concentrating solar thermal energy to produce solar steam and thermal heat for absorption chillers or industrial process heat. The company also developed applications that incorporate its solar collectors for electricity generation and desalination. Kimura created the company name by taking sections of key words: "SO" from "Solar", "PO" from "Power", and "GY" from "Energy and Technology". The company's OEM and IPP sales teams were located in Honolulu along with its research and development, and in 2006 it expanded its manufacturing, C&I, and oil and gas sales teams at its Silicon Valley facility. Sopogy installed 200 megawatts in China and 360 megawatts in Thailand. Kimura and Sopogy, along with First Solar, were featured in the Whole Foods Market documentary Thrive. In 2011 Sopogy was honored with the APEC Business Innovation Award and was featured on the cover of the Los Angeles Times.
Sopogy completed a Series E preferred financing in October 2012 led by Mitsui & Co. and SunEdison, a U.S. solar company, Sempra Energy, 3M, and others. The company announced that Darren Kimura had stepped down as the chairman, chief executive officer and president in March 2013, and SunEdison installed one of its executives as president of the company. After completing a hand-over period, Darren Kimura left the company in May 2013. Sopogy was acquired by Hitachi Power Systems in 2014.
LiveAction
Enerdigm Ventures invested in the founding of LiveAction, Inc., an enterprise network management software company that began as a government project funded in 2007 at Referentia Systems, Inc., and Kimura joined the company as Chairman of the Board of Directors, Executive Chairman and later as Chief Executive Officer. The company is best known for its NetFlow visualization capabilities and its quality of service monitoring and configuration capabilities. Kimura led LiveAction to a place on the Cisco Solutions Plus Program and completed LiveAction's Series A financing, which included participation by Enerdigm Ventures, AITV and Cisco Systems. LiveAction scaled its capabilities to an industry-leading 1 million flows per second and began expanding as a network visualization platform, adding capabilities in Software Defined Networking and SD-WAN. Under Kimura, LiveAction launched LiveAgent to extend visibility to the network end point, and in early 2016 LiveAction completed a $36 million Series B financing led by Insight Venture Partners, Cisco Systems and AITV. In December 2017, Kimura led the acquisition of LivingObjects' Service Provider network monitoring platform, which led to the creation of the LiveSP business unit at LiveAction. In June 2018, Kimura led the acquisition of Savvius, formerly known as WildPackets, known for its OmniPeek protocol analysis software. Kimura retired from LiveAction business operations in 2019.
Spin Technology
In 2020, Enerdigm Ventures invested in Spin Technology, Inc., formerly known as SpinBackup, a data backup product for G Suite and Office 365, and Kimura joined the company as Chairman of the Board of Directors and Executive Chairman. Also referred to as Spin.ai, the technology provides API-based data protection for businesses' critical SaaS cloud data in G Suite and Office 365 environments, application and extension risk assessments for G Suite, and cyber liability insurance. Spin.ai has a 4.9 out of 5 rating in the G Suite marketplace.
Awards
Kimura received the 2002 SBA Young Entrepreneur of the Year award.
He was honored as the 2007 Green Entrepreneur and received the Blue Planet Foundation award in 2009. He also received the Hawaii Venture Capital Association Deal of the Year award in 2012. He was named to Hawaii Business Magazine's 2010 "10 for Today", along with professional baseball player Shane Victorino and eBay founder Pierre Omidyar, and was named to Pacific Business News's "10 to Watch in 2013", along with video game developer and Tetris distributor Henk Rogers.
Non-profit activities
Kimura is active in community and philanthropic activities with a strong focus on Hawaii. He serves as vice president and director at Blue Planet Foundation and as vice president and director at SEE-IT (Science Engineering Exposition of Innovative Technologies). He is also on the board of directors at PBS Hawaii, is entrepreneur in residence at Punahou School, is on the board of directors at Enterprise Honolulu, the Oahu Economic Development Board, and is on the Dean's Council of the University of Hawaii at Manoa's College of Engineering. Kimura sponsors the Hawaii Island Science Fair "Kimura Award for Innovations in Clean Energy", which was awarded to Felix Peng (2016) of Waiakea High School, Ben Kubo (2017) of Parker School, and Cesar Rivera (2017) of St Joseph School of Hilo. Also in 2017, Kimura expanded the Hawaii Island Science Fair award to include the "Kimura Award for Innovations in Computer Science", which was awarded to Ara Uhr of Hilo High School.
See also
List of people associated with renewable energy
List of solar thermal power stations
SolarPACES
Solar thermal collector
Intersolar
References
External links
Energy Industries Corporation
Sopogy
Enerdigm Group
Darren Kimura Twitter Page
1974 births
Living people
American technology chief executives
American technology company founders
People from Hilo, Hawaii
Private equity and venture capital investors
Portland State University alumni
Sustainability advocates
People associated with renewable energy
University of Hawaiʻi alumni
People associated with solar power
Silicon Valley people
American businesspeople of Japanese descent
Businesspeople from Honolulu
American venture capitalists
Angel investors
Hawaii people of Japanese descent |
34857133 | https://en.wikipedia.org/wiki/Integromics | Integromics | Integromics is a global bioinformatics company headquartered in Granada, Spain, with a second office in Madrid, subsidiaries in the United States and United Kingdom, and distributors in 10 countries. Integromics S.L. provides bioinformatics software for data management and data analysis in genomics and proteomics. The company provides a line of products that serve the gene expression, sequencing, and proteomics markets. Customers include genomic research centers, pharmaceutical companies, academic institutions, clinical research organizations, and biotechnology companies.
Integromics was acquired by PerkinElmer in 2017.
Partners
Integromics operates in the global life science market and has an established network of collaborations with international technology providers such as Applied Biosystems, Ingenuity, Spotfire, pharmaceutical companies, and academic institutions.
Integromics has key scientific collaborations with leading research institutions and companies.
RESOLVE. Resolve chronic inflammation and achieve healthy ageing by understanding non-regenerative repair
LIPIDOMIC NET. Lipid droplets as dynamic organelles of the fat deposition and release: translational research towards human disease. It is managed within the EU FP7, in close collaboration with LIPID MAPS and Lipid Bank.
PROACTIVE. High throughput proteomics systems for accelerated profiling of putative plasma biomarkers.
ProteomeXchange. Coordination action to establish proteomics standards. Coordinated by the European Bioinformatics Institute
IRIS. Integrated computational environment for high throughput RNA Interference Screening. Coordinated by Integromics.
Awards and recognition
2007 - Frost & Sullivan Product Innovation of the Year Award
2007 - Emprendedor XXI Innovation Award
2010 - Accésit of Premio Sello Innovación Award
2011 - "Best Trajectory of a Technology-Based Innovative Enterprise (EIBT) 2011" Award
2012 - Award of the Tech Media Europe 2012 & ICT Finance MarketPlace
History
2002 Integromics founded. Integromics was founded as a spin-off of the National Center for Biotechnology (CNB / CSIC) in Spain and the University of Malaga. Principal founder Dr Jose Maria Carazo was motivated by a clear market need to develop new computational methods for analyzing data, with the company’s first product addressing the needs of the microarray data analysis market.
2007 Integromics partners with Applied Biosystems
2007 Integromics Inc. Establishes US Office at Philadelphia Science Center
2008 Integromics partners with TIBCO Spotfire to develop Integromics Biomarker Discovery
2009 Integromics partners with Ingenuity to offer integration for Comprehensive Genomics Analysis
2009 Integromics forms part of the PROACTIVE consortium to develop a unique high throughput plasma biomarker research platform
2009 Integromics releases its first proteomics product
2009 Integromics received a venture investment of 1M€ from I + D Unifondo.
2010 Integromics and TATAA Biocenter collaborate to offer comprehensive qPCR data analysis
2010 Integromics releases its first Next Generation Sequencing Analysis product
2010 Integromics' publication in Nature describes a new class of gene-termini-associated human RNAs that suggests a novel RNA copying mechanism, achieved using Integromics SeqSolve™ Next Generation Sequencing software
2011 Integromics and Ingenuity expand their co-operation with the integration of a fourth Integromics product to Ingenuity's IPA
2011 Integromics launches OmicsHub Proteomics 2.0., a data management and analysis tool for mass spectrometry laboratories and core facilities
2011 Integromics' publication in Mol Cell Proteomics describes multiplexed homogeneous proximity ligation assays for high-throughput protein biomarker research in serological material.
2011 Integromics' publication in Cell describes novel genome-wide polyadenylation profiling, achieved using Integromics SeqSolve™ Next Generation Sequencing software
2012 Integromics partners with FPGMX to develop low-cost methods for clinical genomics
2012 Tibco Spotfire Certifies Integromics as its Sole Partner in the Fields of Genomics, Proteomics and Bioinformatics
2012 Integromics Launches OmicsOffice Platform, a total solution that provides a streamlined and common analysis environment to analyse results from different genomics technologies and the analytical tools to compare and achieve a higher level of results using these combinations
2013 Integromics Partners with PerkinElmer for the Exclusive Worldwide Distributorship of New OmicsOffice Genomics Software From Integromics
2013 Integromics partners with the Celgene Institute for Translational Research Europe (CITRE) and the Centre of Studies and Technical Research (CEIT) to implement the SANSCRIPT project
2013 Integromics Establishes a Key Collaboration with European HPC Experts to Develop New Big-data Computing Solutions for Genomics
Products and services
SeqSolve
SeqSolve is software for the tertiary analysis of Next Generation Sequencing (NGS) data.
RealTime StatMiner
RealTime StatMiner is a step-by-step guide for RT-qPCR data analysis. It is available as a standalone application as well as a TIBCO Spotfire-compatible application, and was co-developed with Applied Biosystems.
Integromics Biomarker Discovery
Integromics Biomarker Discovery (IBD) for microarray gene expression data analysis guides the user throughout a step-by-step workflow.
OmicsHub Proteomics
OmicsHub® Proteomics is a platform for the central management and analysis of data in proteomics labs.
See also
List of bioinformatics companies
Bioinformatics
Computational Biology
Microarray analysis
DNA Microarray
Pathway Analysis
Proteomics
Gene expression
DNA sequencing
References
Bioinformatics software
Genomics companies
Research support companies
Privately held companies of Spain
Companies of Andalusia
Biotechnology companies established in 2002
2002 establishments in Spain
Biotechnology companies of Spain |
31279203 | https://en.wikipedia.org/wiki/Advanced%20Network%20and%20Services | Advanced Network and Services | Advanced Network and Services, Inc. (ANS) was a United States non-profit organization formed in September, 1990 by the NSFNET partners (Merit Network, IBM, and MCI) to run the network infrastructure for the soon to be upgraded NSFNET Backbone Service. ANS was incorporated in the State of New York and had offices in Armonk and Poughkeepsie, New York.
History
ANSNet
In anticipation of the NSFNET Digital Signal 3 (T3) upgrade and the approaching end of the 5-year NSFNET cooperative agreement, in September 1990 Merit, IBM, and MCI formed Advanced Network and Services (ANS), a new non-profit corporation with a more broadly based Board of Directors than the Michigan-based Merit Network. Under its cooperative agreement with US National Science Foundation (NSF), Merit remained ultimately responsible for the operation of NSFNET, but subcontracted much of the engineering and operations work to ANS. Both IBM and MCI made substantial new financial and other commitments to help support the new venture. Allan Weis left IBM to become ANS's first President and Managing Director. Douglas Van Houweling, former Chair of the Merit Network Board and Vice Provost for Information Technology at the University of Michigan, was the first Chairman of the ANS Board of Directors.
Completed in November 1991, the new T3 backbone was named ANSNet and provided the physical infrastructure used by Merit to deliver the NSFNET Backbone Service.
ANS CO+RE
In May, 1991 a new ISP, ANS CO+RE (commercial plus research), was created as a for-profit subsidiary of the non-profit Advanced Network and Services. ANS CO+RE was created specifically to allow commercial traffic on ANSNet without jeopardizing its parent's non-profit status or violating any tax laws.
The NSFNET Backbone Service and ANS CO+RE both used and shared the common ANSNet infrastructure. NSF agreed to allow ANS CO+RE to carry commercial traffic subject to several conditions:
that the NSFNET Backbone Service was not diminished;
that ANS CO+RE recovered at least the average cost of the commercial traffic traversing the network; and
that any excess revenues recovered above the cost of carrying the commercial traffic would be placed into an infrastructure pool to be distributed by an allocation committee broadly representative of the networking community to enhance and extend national and regional networking infrastructure and support.
In 1992, ANS worked to address security concerns raised by potential customers in the wake of recent security incidents (e.g., the Morris worm) and opened an office in Northern Virginia for its security product team. The security team created one of the first Internet firewalls, called ANS InterLock. The InterLock was arguably the first proxy-based Internet firewall product (other firewalls at the time were router-based ACLs or part of a service offering). The InterLock consisted of modifications to IBM's AIX and, later, Sun's Solaris operating system. InterLock's popularity during the boom of the World Wide Web was responsible for the now-familiar proxy settings in the Mosaic browser, so that users could access the Internet transparently through their L7 inspection proxy for HTTP 1.0.
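As a rough sketch of the client-side pattern those browser proxy settings enabled (not the InterLock product itself, whose configuration is not documented here), the following Python snippet routes an HTTP request through an application-level proxy; the proxy address is a placeholder.

```python
import urllib.request

# Placeholder proxy address; a real deployment would use the firewall host
# published by the network administrator.
proxy = urllib.request.ProxyHandler({"http": "http://firewall.example.com:8080"})
opener = urllib.request.build_opener(proxy)

# The request is relayed through the application-level (L7) proxy, which can
# inspect the full HTTP exchange before forwarding it to the origin server.
with opener.open("http://example.com/") as response:
    print(response.status, response.reason)
```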
ANS and in particular ANS CO+RE were involved in the controversies over who and how commercial traffic should be carried over what had, until recently, been a government sponsored networking infrastructure. These controversies are discussed in the "Commercial ISPs, ANS CO+RE, and the CIX" and "Controversy" sections of the NSFNET article.
Sale of networking business to AOL
In 1995, there was a transition to a new Internet architecture and the NSFNET Backbone Service was decommissioned. At this point, ANS sold its networking business to AOL for $35M. The networking business would become a new AOL subsidiary company known as ANS Communications, Inc. Although now two separate entities, both the for-profit and non-profit ANS organizations shared the same pre-sale history.
A new life as a philanthropic organization
With over $35M from the sale of its networking business, ANS became a philanthropic organization with a mission to "advance education by accelerating the use of computer network applications and technology". This work helped create ThinkQuest, the National Tele-Immersion Initiative, and the IP Performance Metrics program, and provided grant funding for educational programs including TRIO Upward Bound, the Internet Society, Internet2, Computers for Youth, Year Up, the National Foundation for Teaching Entrepreneurship, the Sarasota TeXcellence Program, and many others.
One of its philanthropic ventures was to sponsor competitions in science and math, arts and literature, social sciences and even sports. It awarded over $1M in prizes in contests with the goal of using the Web for educational projects with widespread or popular applications.
ANS closes
ANS closed down its operations in mid-2015.
See also
History of the Internet
Commercial Internet eXchange (CIX)
References
History of the Internet
Internet service providers of the United States
Electronics companies established in 1990
Telecommunications companies established in 1990
Companies disestablished in 2008
1990 establishments in the United States
2009 disestablishments in the United States
Organizations established in 1990
Organizations disestablished in 2008 |
54327167 | https://en.wikipedia.org/wiki/Rajkiya%20Engineering%20College%2C%20Ambedkar%20Nagar | Rajkiya Engineering College, Ambedkar Nagar | Rajkiya Engineering College, Ambedkar Nagar is a government engineering college located in Akbarpur, Ambedkar Nagar district, Uttar Pradesh, India. It is an affiliated college of Dr. A.P.J. Abdul Kalam Technical University. Rajkiya Engineering College (R.E.C.) Ambedkar Nagar was established by the Government of Uttar Pradesh under a special component plan in 2010. The college began offering a B.Tech programme in three disciplines – Information Technology (IT), Electrical Engineering (EE) and Civil Engineering (CE) – with an intake of 60 seats in each branch from the 2010–11 session.
History
Rajkiya Engineering College, Ambedkar Nagar was established in October 2010 and originally operated from the campus of Kamla Nehru Institute of Technology (KNIT) in Sultanpur. It moved to its own campus in Ambedkar Nagar in 2012.
The director of KNIT served as the institute's principal until 2015, when K. S. Verma joined as a regular director.
Academics
The Institute offers 4-year, 8-semester B.Tech courses. The B.Tech curriculum prescribed by Abdul Kalam Technical University Lucknow consists of theory courses, practicals, projects and seminars. It also provides weightage for industrial training and extracurricular activities.
Departments
The Institute has the following academic departments:
Department of Applied Sciences & Humanities
Department of Civil Engineering
Department of Electrical Engineering
Department of Information Technology
Workshop
Central Library
Admissions
The institute admits students for its Bachelor in Technology (B.Tech) programme from all over India through the competitive Common Entrance Test SEE-UPTU.
Campus facilities and amenities
REC has different blocks allotted for various academic activities, including co-curricular and extracurricular activities. Apart from hostels, REC has an admin block, a SAC (Student Activity Center), playing courts, an academic block, various engineering labs such as a mechanics lab, and workshops consisting of shops such as welding, foundry and carpentry that are generally included in engineering syllabuses across the country. Many of the labs, such as the networking, Java and other Information Technology Department labs, are located in the academic blocks. The college also has a number of canteens for students and faculty.
Events
Annual college festival - AVIGHNA
AVIGHNA provides an opportunity for budding talent in the institute to perform before a larger audience. The festival features events organised by the Sports, Cultural, Technical, Fine Art and Literary Councils of the college. It gives students a chance to break away from their routine and explore their creative streaks in a variety of events.
National Sports Festival - KSHITIZ
KSHITIZ is a national-level sports fest which promotes a spirit of friendly competition among students of various institutions from around the country. KSHITIZ involves students from all over India competing in the university's sports facilities. The festival includes events in cricket, badminton, basketball, football, handball, athletics, carrom, chess, volleyball and many other sports.
See also
List of colleges affiliated with Dr. A.P.J. Abdul Kalam Technical University
References
External links
Engineering colleges in Uttar Pradesh
Colleges in Ambedkar Nagar district
Educational institutions established in 2010
Dr. A.P.J. Abdul Kalam Technical University
Akbarpur, Ambedkar Nagar
2010 establishments in Uttar Pradesh |
7564083 | https://en.wikipedia.org/wiki/John%20Lansdown | John Lansdown | Robert John Lansdown (2 January 1929 – 17 February 1999) was a British computer graphics pioneer, polymath and Professor Emeritus at Middlesex University Lansdown Centre for Electronic Arts, which was renamed in his honour in 2000.
Lansdown was born in Cardiff. As early as 1960, when he was a successful architect with offices in Russell Square, central London, Lansdown was a believer in the potential for computers for architecture and other creative activities. He pioneered the use of computers as an aid to planning; making perspective drawings on an Elliott 803 computer in 1963, modelling a building's lifts and services, plotting the annual fall of daylight across its site, as well as authoring his own computer aided design applications.
Lansdown joined the ACM in 1972 and Eurographics in 1983. From the early 1970s to the 1990s, he had influential roles in several professional bodies, and chaired the Science Research Council's Computer Aided Building Design Panel, through which he implemented a world leading strategy for developing computer aided architectural design in British universities. He had enormous influence as one of the founders and as secretary of the Computer Arts Society (1968–1991). He was on 10 editorial boards and chaired and organised many international conferences – Event One at the Royal College of Art (1969) and Interact at the Edinburgh Festival (1973) were seminal events in establishing the use of computers for the creation of art works.
In 1977, Lansdown became chairman of System Simulation Ltd, the software company which, amongst other pioneering activities, had played a key role in the creation and development of the Computer Arts Society. System Simulation had been applying computer graphics techniques in TV and film applications following collaborative research work at the Royal College of Art. At System Simulation, Lansdown then played a leading role in several pioneering animation projects, contributing to the flight deck instrumentation readouts on the Nostromo space ship for Ridley Scott's Alien, many advertising sequences and, latterly, working with Tony Pritchett, producing the 3D wireframe drawings from which Martin Lambie-Nairn's original Channel 4 logo was rendered.
Lansdown left the architectural practice in 1982 and split his time between System Simulation and a Senior Research Fellowship at the Royal College of Art before becoming a full-time academic in 1988 as Professor and head of the Centre for Computer Aided Art & Design at Middlesex University, then as Dean of the Department of Art, Design and Performing Arts then, finally, as Pro Vice-Chancellor of the University. He was also Senior Visiting Fellow at the Department of Architectural Science, University of Sydney from 1983. He relinquished these roles on formal retirement in 1995, but continued to be very active and influential as Emeritus Professor in the Centre for Electronic Arts. He continued to advise System Simulation and to work on the development of a digital archive of the Computer Arts Society's history and holdings which the company had initiated. This was ended by his death but has since been brought up-to-date by the CACHe project in the School of Art History at Birkbeck, University of London.
Lansdown's range of publications began to diversify from the early 1970s. He wrote the classic Teach Yourself Computer Graphics (Hodder and Stoughton, 1987), exhibited algorithmically generated images, animations, compositions, conversations, sword fights and choreography, such as the 18-minute dance piece A/C/S/H/O commissioned by the One Extra Dance Company and performed at the Sydney Opera House in 1990. He contributed as author and/or editor to 34 books and worked on more than a hundred conference and journal publications.
Lansdown married Dorothy (Dot) in 1952, and they had two children, Robert and Karen. All survive him.
References
Bibliography
Catherine Mason, A Computer in the Art Room: The Origins of British Computer Art 1950–1980. JJG Publishing, 2008.
Paul Brown, Charlie Gere, Nicholas Lambert, and Catherine Mason (editors), White Heat Cold Logic: British Computer Art 1960–1980. The MIT Press, Leonardo Book Series, 2008.
Charlie Gere, 'Minicomputer Experimentalism in the United Kingdom from the 1950s to 1980' in Hannah Higgins, & Douglas Kahn (Eds.), Mainframe experimentalism: Early digital computing in the experimental arts. Berkeley, CA: University of California Press (2012), pp. 114–116
External links
UNESCO page
SIGGRAPH biography
1929 births
1999 deaths
Scientists from Cardiff
Computer graphics professionals
Welsh computer scientists
Welsh computer programmers
Academics of Middlesex University
Academics of the Royal College of Art |
19076825 | https://en.wikipedia.org/wiki/TeamViewer | TeamViewer | TeamViewer is a remote access and remote control computer software, allowing maintenance of computers and other devices. It was first released in 2005, and its functionality has expanded step by step. TeamViewer is proprietary software, but does not require registration and is free of charge for non-commercial use. It has been installed on more than two billion devices. TeamViewer is the core product of developer TeamViewer AG.
History
Rossmanith GmbH released the first version of TeamViewer software in 2005, at that time still based on the VNC project. The IT service provider wanted to avoid unnecessary trips to customers and perform tasks such as installing software remotely. The development was very successful and gave rise to TeamViewer GmbH, which today operates as TeamViewer Germany GmbH and is part of TeamViewer AG.
Operating systems
TeamViewer is available for all desktop computers with common operating systems, including Microsoft Windows and Windows Server, as well as Apple's macOS. There are also packages for several Linux distributions and derivatives, for example Debian, Ubuntu, Red Hat, and Fedora Linux, as well as for Raspberry Pi OS, a Debian variant for the Raspberry Pi.
TeamViewer is also available for smartphones and tablets running Android or Apple's iOS/iPadOS operating systems, with very limited functionality on Linux-based operating systems. Support for Windows Phone and Windows Mobile was phased out after Microsoft discontinued support for the two operating systems.
Functionality
The functionality of TeamViewer differs depending on the device and the variant or version of the software. The core of TeamViewer is remote access to computers and other endpoints, as well as their control and maintenance. After the connection is established, the remote screen is visible to the user at the other endpoint. Both endpoints can send and receive files as well as access a shared clipboard, for example. In addition, some functions facilitate team collaboration, such as audio and video transmission via IP telephony.
In recent years, the functionality of the software has been optimized in particular for use in large companies. For this purpose, the enterprise variant TeamViewer Tensor was developed. With TeamViewer Pilot, TeamViewer sells software for remote support with augmented reality elements. TeamViewer offers interfaces to other applications and services, for example from Microsoft (Teams), Salesforce, and ServiceNow. The solution is available in nearly all countries and supports over 30 languages.
License policy
Private users who use TeamViewer for non-commercial purposes may use the software free of charge. Fees must be paid for the commercial use of the software. Companies and other commercial customers must sign up for a subscription. A one-time purchase of the application is no longer possible since the switch from a license to a subscription model. The prices for using the software are scaled according to the number of users as well as the number of concurrent sessions. Updates are released monthly and are included for all users.
Security
Both incoming and outgoing connections are possible via the Internet or local networks. If desired, TeamViewer can run as a Windows system service, which allows unattended access via TeamViewer. There is also a portable version of the software that runs completely without installation, for example from a USB drive.
The connection is established using automatically generated unique IDs and passwords. Before each connection, the TeamViewer network servers check the validity of the IDs of both endpoints. Security is enhanced by a device fingerprint, which allows users to obtain additional proof of the remote device's identity. Passwords are protected against brute-force attacks, in particular by increasing the waiting time between connection attempts exponentially. TeamViewer provides additional security features, such as two-factor authentication and block and allow lists.
Before establishing a connection, TeamViewer first checks the configuration of the device and the network to detect restrictions imposed by firewalls and other security systems. Usually, a direct TCP/UDP connection can be established so that no additional ports need to be opened. Otherwise, TeamViewer falls back on other paths such as an HTTP tunnel.
Regardless of the connection type selected, data is transferred exclusively via secure data channels. TeamViewer includes end-to-end encryption based on RSA (4096 bits) and AES (256 bits). According to the manufacturer, man-in-the-middle attacks are not possible in principle; this is intended to be guaranteed by a signed key exchange between two key pairs.
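As a rough illustration of this kind of hybrid scheme, the sketch below uses the Python cryptography library to wrap a 256-bit AES session key with a 4096-bit RSA public key and then encrypt session data symmetrically. It is a generic example under simplifying assumptions, not TeamViewer's actual protocol: the signing of the key exchange and the server-side identity checks described above are omitted.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustration only: a 4096-bit RSA key pair stands in for one endpoint's keys.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
public_key = private_key.public_key()

# A random 256-bit AES session key is wrapped with the peer's RSA public key.
session_key = os.urandom(32)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Session data is then encrypted symmetrically with AES-256 (GCM mode here).
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"remote screen frame", None)

# The receiving endpoint unwraps the session key and decrypts the traffic.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"remote screen frame"
```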
Abuse
Support scam
TeamViewer and similar programs can be abused for technical support scams. In this process, attackers pretend to be employees of well-known companies to gain control over their victims' computers. They then use a pretext to obtain money from their victims. For this reason, the British Internet provider TalkTalk temporarily blocked the software's data traffic. TeamViewer condemns all forms of misuse of the software, provides tips for safe use, and provides a way to investigate corresponding incidents.
Account access
In June 2016, hundreds of TeamViewer users reported having their computers accessed from an unauthorized address in China and their bank accounts misappropriated. TeamViewer attributed the outcome to users' "careless password use" and denied all responsibility, saying "neither was TeamViewer hacked nor is there a security hole, TeamViewer is safe to use and has proper security measures in place. Our evidence points to careless use as the cause of the reported issue, a few extra steps will prevent potential abuse."
See also
Comparison of remote desktop software
Remote desktop software
References
External links
Linux remote administration software
MacOS remote administration software
Portable software
Proprietary cross-platform software
Proprietary software that uses Qt
Remote administration software
Remote desktop
Software that uses Qt
Universal Windows Platform apps
Virtual Network Computing
Web conferencing
Windows remote administration software
Software companies of Germany
Software companies established in 2005 |
41978266 | https://en.wikipedia.org/wiki/Victoria%20Crowe | Victoria Crowe | Victoria Elizabeth Crowe OBE, DHC, FRSE, MA (RCA) RSA, RSW (born 1945) is a Scottish artist known for her portrait and landscape paintings. She has works in several collections including the National Galleries of Scotland, the National Portrait Gallery, London, and the Royal Scottish Academy.
Life
Victoria Crowe was born in Kingston-on-Thames on 8 May 1945 and educated at Ursuline Convent Grammar School, London. She studied at Kingston College of Art from 1961-5, before undertaking further study at the Royal College of Art in London from 1965 to 1968. On the strength of her postgraduate exhibition, she was invited to teach at Edinburgh College of Art by Robin Philipson, Head of Drawing and Painting. She worked at ECA for the next thirty years as a part-time lecturer in Drawing and Painting, while also developing her own artistic practice.
She and her husband Michael Walton settled at Kittleyknowe near Carlops in the Pentland Hills, Scotland, where they befriended the shepherdess Jenny Armstrong. In 1973 she had a son and in 1976 a daughter. Her son died in 1995, and they set up a trust in his name to raise awareness and funds to tackle oral cancers in young people.
Work
She began painting formal portraits in the early 1980s. She has produced many individual portraits, including RD Laing, Kathleen Raine, Tam Dalyell, and Peter Higgs.
Her work includes the series A Shepherd's Life, painted between 1970 and 1985 and first shown at the Scottish National Portrait Gallery in Edinburgh in 2000, which portrays the life of Jenny Armstrong, an elderly shepherd from the Scottish Borders who was Crowe's neighbour at Kittleyknowe. One of the works in the series, Two Views, was converted into a tapestry by Dovecot Studios in Edinburgh, commissioned by Richard Scott, 10th Duke of Buccleuch.
Her first solo exhibition was held in 1983 at the Thackeray Gallery, London, where she would continue to exhibit regularly until 2007.
Between 1970 and 1985, Crowe undertook study trips to Russia, Denmark and Finland. She visited Italy in the early 1990s, which added the influence of Italian Renaissance art to her works, leading to a new phase of increased confidence and achievement. However, in 1994 her art was forced to respond to her son's diagnosis with cancer and then to his death in 1995, which resulted in a series of works expressing her grief, through repeated motifs such as the moon and flowers. Her works in the 21st century included wintry landscapes with skeletal hazel trees which Duncan Macmillan called "numinous pictures; they are spiritual landscapes".
In 2004 Crowe was appointed Senior Visiting Scholar at St Catherine's College, University of Cambridge. The work she produced during this period was shown at the exhibition Plant Memory at the Royal Scottish Academy, Edinburgh.
In 2017 Crowe designed The Leathersellers' Tapestry for the Dining Hall of the Leathersellers' Building in London. The forty metre-long tapestry was woven at Dovecot Tapestry Studios in Edinburgh.
Honours
2004: Officer of the Order of the British Empire (OBE)
2009: Awarded Doctor Honoris Causa (DHC), University of Aberdeen
2010: Fellow of the Royal Society of Edinburgh
Member of the Royal Scottish Academy
Selected exhibitions
1989: Bruton Gallery, Bath
1991: Thackeray Gallery, London
1993: Bruton Gallery, Bath
1993: Ancrum Gallery, Borders Festival
1994: Thackeray Gallery, London
2000: A Shepherd's Life, Scottish National Portrait Gallery
2007: Plant Memory, Royal Scottish Academy
2009: Fine Art Society, London
2013: Fleece to Fibre, Dovecot Studios, Edinburgh
2018: Victoria Crowe: Beyond Likeness, Scottish National Portrait Gallery
2019: Victoria Crowe: 50 Years of Painting, City Art Centre, Edinburgh
Collections
Crowe's work is held in a wide range of collections, including:
National Galleries of Scotland: Portraits including Callum Macdonald (1996), Graham Crowden (1996) and Winifred Rushforth (1982)
National Portrait Gallery, London: Portraits of Kathleen Raine (1986) and Dame Janet Vaughan (1986)
Royal Scottish Academy
National Trust for Scotland: Portrait of Lord Wemyss (1989)
St John's College, Oxford: Portrait of Professor Bill Hayes (1991)
National Museum of Scotland: Large Tree Group tapestry (2012), produced in collaboration with Dovecot Tapestry Studios, Edinburgh
Royal Society of Edinburgh: Portraits of Dame Jocelyn Bell Burnell (2016) and Professor Peter Higgs (2013)
St Catherine's College, Cambridge: Portrait of Professor David Ingram (2003)
Bibliography
Monographs
Crowe, Victoria and Walton, Michael, Victoria Crowe: Painted Insights, Antique Collectors Club, 2001
Macmillan, Duncan. Victoria Crowe. Antique Collectors' Club Ltd, 2012.
Mansfield, Susan, Macmillan, Duncan and Peploe, Guy. Victoria Crowe: 50 Years of Painting. Sansom & Co., 2019.
Further reading
Crowe, Victoria and Robertson, Naomi, Victoria Crowe: The Leathersellers' Tapestry, The Leathersellers' Company, 2017
Taubman, Mary, Lawson, Julie and Crowe, Victoria, A Shepherd's Life: Paintings of Jenny Armstrong by Victoria Crowe, Scottish National Portrait Gallery, 2018
Macmillan, Duncan and Crowe, Victoria, Victoria Crowe: Beyond Likeness, Scottish National Portrait Gallery, 2019
Mansfield, Susan and Spence, Alan, Catching the Light, The Scottish Gallery, 2019
References
External links
Her website
Catalogue for Victoria Crowe. 50 Years: Drawing and Thinking exhibition at The Scottish Gallery, Edinburgh
People from Kingston upon Thames
Officers of the Order of the British Empire
Fellows of the Royal Society of Edinburgh
Alumni of the Royal College of Art
1945 births
Living people |
41262343 | https://en.wikipedia.org/wiki/Clay%20Helton | Clay Helton | Clay Charles Helton (born June 24, 1972) is an American college football coach and former player, who is currently the head coach at Georgia Southern. He was previously the head coach of USC from 2015 to 2021. Helton has also been an assistant coach for Duke, Houston and Memphis. His father, Kim Helton, was a coach in college, the National Football League, and the Canadian Football League.
Early life
Helton was born on June 24, 1972, in Gainesville, Florida, where his father Kim Helton, was a graduate assistant for the Florida Gators football team. The Helton family later lived in the Miami, Tampa Bay, and Houston areas, as Kim Helton later coached for the University of Miami, Tampa Bay Buccaneers, and Houston Oilers. Clay Helton attended Clements High School in Sugar Land, Texas and graduated in 1990.
College playing career
After redshirting his freshman year, Helton played college football at Auburn as quarterback. In 1993, Helton transferred to Houston, after his father was hired as head coach there. Helton was a backup quarterback at both Auburn and Houston and graduated from Houston in 1994 with a degree in mathematics and interdisciplinary science. At Houston, Helton completed 47 of 87 passes for 420 yards, one touchdown, and four interceptions and played 16 games.
Coaching career
In 1995, Helton enrolled at Duke University and became a graduate assistant for the Duke Blue Devils football team under Fred Goldsmith. Helton later was promoted as running backs coach in 1996.
Helton joined his father at Houston to be running backs coach in 1997 and remained in that position until 1999, Kim Helton's final season as head coach.
After leaving Houston, Helton joined Rip Scherer's staff at Memphis, also as running backs coach. Helton stayed on staff under new coach Tommy West, who replaced Scherer in 2001, and moved to coaching the wide receivers in 2003. By 2007, Helton had been promoted to offensive coordinator and quarterbacks coach. Players Helton coached at Memphis include DeAngelo Williams, a first-round NFL draft pick in 2006, and 2006 Conference USA All-Freshman pick Duke Calhoun.
USC
Helton was hired by USC to be quarterbacks coach in 2010 under Lane Kiffin. In 2013, he was promoted to offensive coordinator. Helton served as the team's interim head coach during their bowl game after their previous interim head coach, Ed Orgeron, resigned following the hiring of Steve Sarkisian. On October 11, 2015, he once again became the interim head coach of the Trojans after head coach Steve Sarkisian took a leave of absence, and was then subsequently fired. On November 30, 2015, USC removed the interim tag and formally named Helton the 23rd head coach in school history. After Helton was named the permanent head coach, USC lost its final two games of the 2015 season to Stanford in the Pac-12 championship game and Wisconsin in the Holiday Bowl. In Helton's first full season as head coach, USC started off 1–3 with losses to Alabama, Stanford, and Utah, but then won its final eight games of the 2016 regular season as well as the Rose Bowl against Penn State to end the season with a record of 10–3 and third place in the AP poll.
On September 13, 2021, Helton was relieved of his duties at USC. His buyout was in the $12 million range. Including two stints as the interim head coach, Helton's record was 46–24 as the Trojans' coach, including a Rose Bowl win to cap the 2016 season. USC went 1–1 under Helton in the 2021–22 season.
Georgia Southern
On November 2, 2021, Helton was announced by Director of Athletics, Jared Benko, as the 11th head coach for Georgia Southern, replacing interim head coach Kevin Whitley. On December 15, 2021, Helton celebrated his first signing class as head coach of the Eagles.
Head coaching record
Notes
References
External links
USC profile
Memphis profile
Houston profile
1972 births
Living people
American football quarterbacks
Auburn Tigers football players
Duke Blue Devils football coaches
Georgia Southern Eagles football coaches
Houston Cougars football coaches
Memphis Tigers football coaches
Houston Cougars football players
USC Trojans football coaches
Duke University alumni
People from Gainesville, Florida
People from Sugar Land, Texas
Coaches of American football from Texas
Players of American football from Texas |
13227993 | https://en.wikipedia.org/wiki/OpenSolaris%20for%20System%20z | OpenSolaris for System z | OpenSolaris for System z is a discontinued port of the OpenSolaris operating system to the IBM System z line of mainframe computers.
History
OpenSolaris is based on Solaris, which was originally released by Sun Microsystems in 1991. Sun released the bulk of the Solaris system source code in OpenSolaris on 14 June 2005, which made it possible for developers to create other OpenSolaris distributions. Sine Nomine Associates began a project to bring OpenSolaris to the IBM mainframe in July, 2006. The project was named Sirius (in analogy to the Polaris project to port OpenSolaris to PowerPC). In April, 2007, Sine Nomine presented an initial progress report at IBM's System z Technical Expo conference.
At the Gartner Data Center Conference in Las Vegas, Nevada in late 2007, Sine Nomine demonstrated OpenSolaris running on IBM System z under z/VM. It was there that David Boyes of Sine Nomine stated that OpenSolaris for System z would be available "soon."
At the SHARE conference on 13 August 2008, Neale Ferguson of Sine Nomine Associates presented an update on the progress of OpenSolaris for System z. This presentation included a working demonstration of OpenSolaris for System z. During this presentation he stated that while OpenSolaris is "not ready for prime-time" they hoped to have a version available to the public for testing "in a matter of weeks rather than months."
In October, 2008, Sine Nomine Associates released the first "prototype" (it lacks a number of features such as DTrace, Solaris Containers and the ability to act as an NFS server) of OpenSolaris for System z to the public. OpenSolaris for System z has a project page at OpenSolaris.org. OpenSolaris for System z is available for download at no charge, and is governed by the same open source license terms as OpenSolaris for other platforms. All source code is available; there are no OCO (object code only) modules.
The port uses z/Architecture 64-bit addressing and therefore requires an IBM System z mainframe. Because the port depends on recently defined z/Architecture processor instructions, it requires a System z9 or later mainframe model and will not run on older machines. It also will not run on the release version of the Hercules mainframe emulator; the needed changes are included in SVN version 5470 of Hercules. The port also requires the paravirtualization features provided by z/VM; it will not run on "bare metal" or in a logical partition (LPAR) without the z/VM hypervisor at Version 5.3 level or later. In addition, because OpenSolaris uses a new network DIAGNOSE instruction, PTF VM64466 or VM64471 must be applied to z/VM to provide support for that instruction. On 18 November 2008, IBM authorized the use of IFL processors to run OpenSolaris for System z workloads.
In March 2010, The Register reported on an email from an insider indicating that development of the port had ended.
See also
Linux on IBM Z
UTS (Mainframe UNIX)
References
External links
OpenSolaris Project: System z (source code and project home)
Sine Nomine Associates
OpenSolaris for System z Distribution (binary code download site)
OpenSolaris
IBM mainframe operating systems
VM (operating system) |
1849567 | https://en.wikipedia.org/wiki/Dennis%20Brown | Dennis Brown | Dennis Emmanuel Brown CD (1 February 1957 – 1 July 1999) was a Jamaican reggae singer. During his prolific career, which began in the late 1960s when he was aged eleven, he recorded more than 75 albums and was one of the major stars of lovers rock, a subgenre of reggae. Bob Marley cited Brown as his favourite singer, dubbing him "The Crown Prince of Reggae", and Brown would prove influential on future generations of reggae singers.
Biography
Early life and career
Dennis Brown was born on 1 February 1957 at the Victoria Jubilee Hospital in Kingston, Jamaica. His father Arthur was a scriptwriter, actor, and journalist, and he grew up in a large tenement yard between North Street and King Street in Kingston with his parents, three elder brothers and a sister, although his mother died in the 1960s. He began his singing career at the age of nine, while still at junior school, with an end-of-term concert the first time he performed in public, although he had been keen on music from an even earlier age, and as a youngster was a keen fan of American balladeers such as Brook Benton, Sam Cooke, Frank Sinatra, and Dean Martin. He cited Nat King Cole as one of his greatest early influences. He regularly hung around JJ's record store on Orange Street in the rocksteady era and his relatives and neighbours would often throw Brown pennies to hear him sing in their yard. Brown's first professional appearance came at the age of eleven, when he visited "Tit for Tat" a local West Kingston Nightclub where his brother Basil was performing a comedy routine, and where he made a guest appearance with the club's resident group, the Fabulous Falcons (a group that included Cynthia Richards, David "Scotty" Scott, and Noel Brown). On the strength of this performance he was asked to join the group as a featured vocalist. When the group performed at a JLP conference at the National Arena, Brown sang two songs – Desmond Dekker's "Unity" and Johnnie Taylor's "Ain't That Loving You" – and after the audience showered the stage with money, he was able to buy his first suit with the proceeds. Bandleader Byron Lee performed on the same bill, and was sufficiently impressed with Brown to book him to perform on package shows featuring visiting US artists, where he was billed as the "Boy Wonder".
As a young singer Brown was influenced by older contemporaries such as Delroy Wilson (whom he later cited as the single greatest influence on his style of singing), Errol Dunkley, John Holt, Ken Boothe, and Bob Andy. Brown's first recording was an original song called "Lips of Wine" for producer Derrick Harriott, but when this was not released, he recorded for Clement "Coxsone" Dodd's Studio One label, and his first session yielded the single "No Man is an Island", recorded when Brown was aged twelve and released in late 1969. The single received steadily increasing airplay for almost a year before becoming a huge hit throughout Jamaica. Brown recorded up to a dozen sessions for Dodd, amounting to around thirty songs, and also worked as a backing singer on sessions by other artists, including providing harmonies along with Horace Andy and Larry Marshall on Alton Ellis's Sunday Coming album. Brown was advised by fellow Studio One artist Ellis to learn guitar to help with his songwriting, and after convincing Dodd to buy him an instrument, was taught the basics by Ellis. These Studio One recordings were collected on two albums, No Man is an Island and If I Follow my Heart (the title track penned by Alton Ellis), although Brown had left Studio One before either was released. He went on to record for several producers including Lloyd Daley ("Baby Don't Do It" and "Things in Life"), Prince Buster ("One Day Soon" and "If I Had the World"), and Phil Pratt ("Black Magic Woman", "Let Love In", and "What About the Half"), before returning to work with Derrick Harriott, recording a string of popular singles including "Silhouettes", "Concentration", "He Can't Spell", and "Musical Heatwave", with the pick of these tracks collected on the Super Reggae and Soul Hits album in 1973. Brown also recorded for Vincent "Randy" Chin ("Cheater"), Dennis Alcapone ("I Was Lonely"), and Herman Chin Loy ("It's Too Late" and "Song My Mother Used to Sing") among others, with Brown still at school at this stage of his career.
International success
In 1972, Brown began an association that would result in his breakthrough as an internationally successful artist; He was asked by Joe Gibbs to record an album for him, and one of the tracks recorded as a result, "Money in my Pocket", was a hit with UK reggae audiences and quickly became a favourite of his live performances. This original version of "Money in my Pocket" was in fact produced by Winston "Niney" Holness on behalf of Gibbs, with musical backing from the Soul Syndicate. In the same year, Brown performed as part of a Christmas morning showcase in Toronto, Ontario, Canada, along with Delroy Wilson, Scotty, Errol Dunkley, and the Fabulous Flames, where he was billed as the "Boy Wonder of Jamaica" and was considered the star of the show in a local newspaper review. The song's popularity in the UK was further cemented with the release of a deejay version, "A-So We Stay (Money in Hand)", credited to Big Youth and Dennis Brown, which outsold the original single and topped the Jamaican singles chart. Brown and Holness became close, even sharing a house in Pembroke Hall. Brown followed this with another collaboration with Holness on "Westbound Train", which was the biggest Jamaican hit of summer 1973, and Brown's star status was confirmed when he was voted Jamaica's top male vocalist in a poll by Swing magazine the same year. Brown followed this success with "Cassandra" and "No More Will I Roam", and tracks such as "Africa" and "Love Jah", displaying Brown's Rastafari beliefs, became staples on London's sound system scene. In 1973, Brown was hospitalized due to fatigue caused by overwork, although at the time rumours spread that he only had one lung and had only a week to live, or had contracted tuberculosis. He was advised to take an extended break from performing and concentrated instead on his college studies.
Brown returned to music and toured the United Kingdom for the first time in late summer 1974 as part of a Jamaican showcase, along with Cynthia Richards, Al Brown, Sharon Forrester, and The Maytals, after which he was invited to stay on for further dates (where he was backed by The Cimarons, staying in the UK for another three months. While in the UK, he recorded for the first time since his hospitalization, working with producer Sydney Crooks, and again backed by the Cimarons. While Brown was in the UK, Gibbs released an album collecting recordings made earlier in Jamaica, released as The Best of Dennis Brown, and Brown's first single to get a proper UK release was issued on the Synda label – "No More Will I Roam". He returned to Jamaica for Christmas, but six weeks later was back in the UK, now with Holness in tow as his business manager, to negotiate a record deal with Trojan Records, the first Brown album to be released as a result being Just Dennis, although the pair would be left out of pocket after Trojan's collapse and subsequent buyout by Saga Records. On their return to Jamaica, Brown and Holness resumed recording in earnest with tracks for a new album, including "So Long Rastafari", "Boasting", and "Open the Gate". During 1975, Brown also recorded one-off sessions for Sonia Pottinger ("If You leave Me") and Bunny Lee ("So Much Pain", a duet with Johnny Clarke), and the first recordings began to appear on Brown's new DEB Music label. In the wake of the Trojan collapse, Brown and Holness arranged a deal with local independent label owners Castro Brown (who ran Morpheus Records) and Larry Lawrence (Ethnic Fight) to distribute their releases in the UK. Brown saw the UK as the most important market to target and performed for five consecutive nights at the Georgian Club in Croydon to raise funds to start his new DEB Music label with Castro Brown. In early 1976, Castro secured a deal with Radio London disc jockey Charlie Gillett for Morpheus (and hence DEB) output to be issued through the latter's Oval Records, which had a distribution deal with Virgin Records, but after a dispute over Castro's separate supply of these records to London record shops, the deal was scrapped and the early DEB releases suffered from a lack of promotion. Later that year, Brown voiced two tracks at Lee "Scratch" Perry's Black Ark studio, "Take a Trip to Zion" and "Wolf and Leopard", the latter of which was a hit in Jamaica and would prove to be one of Brown's most popular songs, with a lyric criticizing those criminals who "rode the natty dread bandwagon". Brown confirmed in an interview in Black Echoes that he had parted company with Holness, stating: "I was going along with one man's ideas for too long. Niney was trying to find a new beat at all times, which was disconcerting, so I hadn't been working with my true abilities. Now I know that I can produce myself."
Brown began working again with Joe Gibbs, with an agreement that in return for studio time for his own productions, Brown would allow Gibbs use of any rhythm recorded in the process. The first album from this arrangement, the 1977 release Visions of Dennis Brown, gave him his biggest success so far, blending conscious themes and love songs, and confirming Brown's transformation from child star to grown up artist. The biblical-themed sleeve and portrait of Haile Selassie on the back complemented the roots reggae tracks on the album, including "Repatriation", "Jah Can Do it", and cover versions of Earl 16's "Malcolm X" and Clive Hunt's "Milk and Honey". The album immediately entered the Black Echoes chart and stayed there well into the following year, although it was only available in the UK as an expensive import. Visions... was voted reggae album of the year by Melody Maker writers and was given the same award by readers of Black Echoes. A reissued "Wolf and Leopard" single, and the eventual album release of the same name also sold well in the UK, both topping the Black Echoes chart.
Brown toured the UK in Autumn 1977 with Big Youth, and described the tour: "It's like I was appointed to deliver certain messages and now is the time to deliver them". He had also begun producing recordings by his protege, Junior Delgado. In 1978, Brown moved to live in London, and set up premises in Battersea Rise, near Clapham Junction to relaunch the DEB Music label with Castro Brown, with artists featured on the label including Junior Delgado, 15.16.17, Bob Andy, Lennox Brown, and later, Gregory Isaacs. Brown had further success himself with a discomix of "How Could I Leave You", a version of The Sharks' rocksteady standard "How Could I Live" with accompanying toast by Prince Mohamed. In March 1978, Brown flew to Jamaica, where he was booked at the last minute to perform at the One Love Peace Concert at the National Arena, backed by Lloyd Parks' We The People Band. Visions of Dennis Brown was given a wider distribution via a deal between Lightning Records and WEA and topped the UK reggae album chart in September 1978, this chart run lasting for five months. In August 1978, Brown returned to the UK, bringing Junior Delgado with him, and DEB Music released a series of singles, although they sold moderately compared to the label's earlier successes, but in the same month, Brown's breakthrough single was first released. Initially released as a discomix featuring a new version of "Money in my Pocket" and the deejay version "Cool Runnings" by Price Mohamed, which became unavailable for a time after quickly selling out its first pressing, this single gave Brown his first UK Top 40 hit, reaching number 14 the following year and becoming one of the biggest international hits in Jamaica's history, after crossing over first into soul clubs and then rock clubs. This success led to Brown featuring on the cover of the NME in February 1979.
Brown's next two albums were both released on DEB – So Long Rastafari and Joseph's Coat of Many Colours, although the label was closed down in 1979, after which Brown again did the rounds of Jamaica's top producers, as well as continuing self-productions with singles such as "The Little Village" and "Do I Worry?" in 1981.
A&M and the dancehall era
With continuing commercial success, Brown signed an international deal with A&M Records in 1981, and now based permanently in the UK, his first album release for the label was the Gibbs-produced Foul Play, which while not wholly a success included the roots tracks "The Existence of Jah" and "The World is Troubled". This was followed in 1982 by Love Has Found its Way, a Gibbs/Brown/Willie Lindo production that blended lovers rock with a more pop sound, and again was not a great success. His final album with the label, 1983's The Prophet Rides Again, again mixed roots themes with commercial R&B style tracks, and proved to be his swansong with the label. While his association with A&M had taken him in a more commercial pop direction, Kingston's music scene had shifted towards the new dancehall era, and Brown enthusiastically adapted to the new sound, recording for some of the genre's major producers including Prince Jammy and Gussie Clarke. In the early 1980s he also started a new label, Yvonne's Special, dedicated to his wife. In 1984, he collaborated with Gregory Isaacs on the album Two Bad Superstars Meet and the hit single "Let aaf Sum'n", recorded with Sly & Robbie and Jammy, which was followed by a second album featuring the two stars, Judge Not, in 1985. Brown released a huge amount of work through the 1980s, including the 1986 Jammy-produced album The Exit, but his biggest success of the decade came in 1989 with the Gussie Clarke-produced duet with Isaacs "Big All Round", and the album Unchallenged. He continued to record prolifically in the 1990s, notably on the Three Against War album in 1995 with Beenie Man and Triston Palma, and on albums produced by Mikey Bennett, and his profile in the United States was raised by a series of album releases on RAS Records. In the late 1990s he was managed by Tommy Cowan, who contrasted Brown to Bob Marley, who he had also managed, stating "Bob Marley was a serious businessman, I don't think Dennis was as serious when it came to investment. Dennis was like a community person, he would earn money and in one hour he would give it away." Brown said of his approach to songwriting in the late 1990s:
"When I write a song I try to follow Joseph's way – deliverance through vision from all – true vibration. I want to be a shepherd in my work, teaching and learning, really singing so much. I don't want to sing and not live it. I must live it. If I can sing songs that people can watch me living, then they can take my work"
Brown's 1994 album Light My Fire was nominated for a Grammy Award, as was the last album recorded by Brown, Let Me Be the One (in 2001).
Death
In the late 1990s, Brown's health began to deteriorate. He had developed respiratory problems, probably exacerbated by longstanding drug addiction, mainly to cocaine, and was taken ill in May 1999 after touring Brazil with other reggae singers, being diagnosed with pneumonia. After returning to Kingston, Jamaica, he suffered a cardiac arrest on the evening of 30 June 1999 and was rushed to the city's University Hospital. Brown died the following day; the official cause of death was a collapsed lung. The sitting Jamaican Prime Minister, P. J. Patterson, and former prime minister Edward Seaga of the Jamaica Labour Party, then serving as opposition leader, both spoke at Brown's funeral, which was held in Kingston on 17 July 1999. The service, which lasted three hours, also featured live performances by Maxi Priest, Shaggy, and three of Brown's sons. Brown was buried at Kingston's National Heroes Park, and was survived by his wife Yvonne and ten children. Prime Minister Patterson paid tribute to Brown, saying: "Over the years, Dennis Brown has distinguished himself as one of the finest and most talented musicians of our time. The Crown Prince of Reggae as he was commonly called. He has left us with a vast repertoire of songs which will continue to satisfy the hearts and minds of us all for generations to come."
Dennis Brown's brother Leroy Clarke spoke about his brother as follows: "I just give Jah thanks and praise for Dennis’ life and what he has contributed to the world through the root of music, regardless of the rumors out there about him, he has done a lot. He has paid his dues. You want to know the true Dennis? Listen to his lyrics. He was singing from the heart" (The Beat, Volume 18, #5/6).
Legacy
Dennis Brown was an inspiration and influence for many reggae singers from the late 1970s through to the 2000s, including Barrington Levy, Junior Reid, Frankie Paul, Luciano, Bushman, and Richie Stephens. In July 1999, a group of UK-based musicians and more than fifty vocalists working under the collective name The British Reggae All Stars (including Mafia & Fluxy, Carlton "Bubblers" Ogilvie, Peter Hunnigale, Louisa Mark, Nerious Joseph, and Sylvia Tella) recorded "Tribute Song", a medley of six of Brown's best-known songs, in memory of Brown.
He was honoured on the first anniversary of his death by a memorial concert in Brooklyn, which featured performances from Johnny Osbourne, Micky Jarrett, Delano Tucker, and Half Pint. In 2001, a charitable trust was set up in Brown's name. The Dennis Emanuel Brown Trust works to educate young people, maintain and advance the memory of Dennis Brown, and help to provide them with musical instruments. The trust awards the Dennis Emanuel Brown (DEB) bursary for educational achievement each year to students between the ages of 10 and 12 years. In 2005, George Nooks, who had worked with Brown in the mid-1970s in his deejay guise as Prince Mohamed, released an album of Brown covers, George Nooks Sings Dennis Brown: The Voice Lives On, with Nooks stating: "I was always inspired by his talent and I used to sing like him. Dennis had a large influence on me. To me he was the greatest. He was my number one singer." In the same year, Gregory Isaacs paid a similar tribute with the album Gregory Isaacs Sings Dennis Brown. In February 2007, a series of events was staged in Jamaica in celebration of the lives of both Brown and Marley (both would have had birthdays that month). In 2008, the Dennis Brown Trust announced a new internet radio station dedicated solely to Brown's music, and in the same year a tribute concert was staged by the Jamaican Association of Vintage Artistes and Affiliates (JAVAA) featuring Dwight Pinkney, Derrick Harriott, Sugar Minott, George Nooks, and John Holt.
Songs about or dedicated to Brown include "Song for Dennis Brown" by The Mountain Goats, "If This World Were Mine" by Slightly Stoopid, "Drive" by the band Pepper, and Whitney Houston's "Whitney Houston Dub Plate" on The Ecleftic: 2 Sides II a Book album by Wyclef Jean.
On 26 April 2010, Brown was featured on NPR's Morning Edition news program as one of the "50 great voices – The stories of awe-inspiring voices from around the world and across time". The NPR "50 Great Voices" list includes Nat King Cole, Ella Fitzgerald, Mahalia Jackson and Jackie Wilson, among others.
On 6 August 2011, the 49th anniversary of Jamaica's independence, the Governor-General of Jamaica posthumously conferred the Order of Distinction in the rank of Commander (CD) upon Brown for his contribution to the Jamaican music industry.
In April 2012, a commemorative blue plaque was placed on Brown's home in Harlesden by the Nubian Jak Community Trust.
Discography
Studio albums
1970 – No Man is an Island (Studio One)
1971 – If I Follow My Heart (Studio One)
1972 – Super Reggae & Soul Hits (Crystal/Trojan)
1974 – The Best of Dennis Brown (Joe Gibbs) aka Best of Part 1 (1979, Joe Gibbs)
1975 – Deep Down (Observer), reissued in 1979 as So Long Rastafari (Harry J)
1975 – Just Dennis (Observer/Trojan)
1977 – Superstar (Micron)
1977 – Wolf & Leopards (DEB/Weed Beat)
1977 – Dennis Brown Meets Harry Hippy (Pioneer)(with Harry Hippy)
1978 – Westbound Train (Third World), aka Africa (Celluloid)
1978 – Visions of Dennis Brown (Joe Gibbs)
1979 – Joseph's Coat Of Many Colors (DEB)
1979 – Words of Wisdom (Joe Gibbs/Atlantic)
1980 – Spellbound (Joe Gibbs/Laser)
1981 – Money in My Pocket (Trojan)
1981 – Foul Play (Joe Gibbs/A&M)
1982 – Best Of Part 2 (Joe Gibbs)
1982 – Love Has Found Its Way (Joe Gibbs/A&M) (UK No. 72, US R&B #36)
1982 – More (Yvonne's Special)
1982 – Stage Coach Showcase (Yvonne's Special)
1982 – Yesterday, Today, & Tomorrow (Joe Gibbs)
1983 – Satisfaction Feeling (Yvonne's Special/Tad's)
1983 – The Prophet Rides Again (A&M)
1984 – Judge Not (with Gregory Isaacs) (Music Works/Greensleeves)
1984 – Two Bad Superstars (with Gregory Isaacs) (Burning Sounds)
1984 – Love's Got A Hold On Me (Joe Gibbs)
1984 – Revolution (Taxi/Yvonne's Special)
1984 – Reggae Super Stars Meet (with Horace Andy) (Striker Lee)
1985 – Slow Down (Jammy's/Greensleeves)
1985 – Wake Up (Natty Congo)
1985 – Wild Fire (with John Holt) (Natty Congo)
1986 – Brown Sugar (Taxi)
1986 – Baalgad (with Enos McLeod) (Goodies)
1986 – History (Live & Love)
1986 – Hold Tight (Live & learn)
1986 – The Exit (Jammy's)
1987 – So Amazing (with Janet Kay) (Trojan)
1987 – Visions (Shanachie)
1988 – Inseparable (WKS)
1989 – No Contest (with Gregory Isaacs) (Music Works/Greensleeves)
1989 – Death Before Dishonour (Tappa)
1989 – Good Vibrations (Yvonne's Special)
1990 – Over Proof (Two Friends/Greensleeves)
1990 – Unchallenged (Music Works/Greensleeves)
1990 – Reggae Giants (with Freddie McGregor) (Rocky One)
1990 – Sarge (Yvonne's Special)
1991 – Victory is Mine (Legga/RAS)
1992 – Another Day in Paradise (Trojan)
1992 – Beautiful Morning (World Record)
1992 – Blazing (Two Friends/Shanachie/Greensleeves)
1992 – Friends For Life (Black Scorpio/Shanachie)
1992 – Limited Edition (Artistic/VP/Greensleeves)
1992 – If I Didn't Love You
1992 – Cosmic (Observer)
1993 – Cosmic Force (Heartbeat)
1993 – The General (VP)
1993 – Legit (with Freddie McGregor & Cocoa Tea) (Greensleeves/Shanachie)
1993 – Rare Grooves Reggae Rhythm & Blues (Body Music/Yvonne's Special)
199? – Rare Grooves Reggae Rhythm & Blues vol. 2 (Yvonne's Special)
1993 – Songs of Emanuel (Yvonne's Special/Sonic Sounds)
1993 – Unforgettable (Jammy's)
1993 – Hotter Flames (with Frankie Paul) (VP)
1993 – Give Praises (Tappa)
1993 – It's The Right Time
1994 – 3 Against War (with Triston Palma & Beenie Man) (VP)
1994 – Blood Brothers (with Gregory Isaacs) (RAS)
1994 – Light My Fire (Heartbeat)
1994 – Nothing Like This (Greensleeves/RAS)
1994 – Party Time (with John Holt) (Sonic Sounds)
1994 – Vision of the Reggae King (Gold Mine/VP)
1995 – I Don't Know (Grapevine/Dynamite)
1995 – Temperature Rising (Trojan)
1995 – Dennis Brown and Friends (with Sugar Minott & Justin Hinds) (Jamaican Authentic Classics)
1995 – The Facts of Life (Diamond Rush)
1995 – You Got the Best of Me (Saxon)
1996 – Could It Be (VP)
1996 – Lovers Paradise (House of Reggae)
1996 – Milk & Honey (RAS)
1997 – Meet at the Penthouse (with Leroy Smart) (Rhino)
1998 – One of a Kind (Imaj)
1999 – Believe in Yourself (Don One/TP)
1999 – Bless Me Jah (RAS/Charm)
1999 – Generosity (Gator)
Posthumous releases and compilations
1983 – The Best of Dennis Brown (Blue Moon)
1987 – Greatest Hits (Rohit)
1987 – My Time (Rohit)
1990 – Go Now (Rohit)
1991 – Classic Gold (Rocky One)
1992 – Kollection (Gong Sounds)
1992 – Some Like It Hot (Heartbeat)
1992 – Classic Hits (Sonic Sounds)
1993 – Best Of – Musical Heatwave 1972–75 (Trojan)
1993 – 20 Magnificent Hits (Thunderbolt)
1993 – It's the Right Time (Rhino)
1994 – The Prime of Dennis Brown (Music Club)
1994 – Early Days (Sonic Sounds)
1995 – Africa – the Best of Dennis Brown vol. 1 (Esoldun)
1995 – Travelling Man – the Best of Dennis Brown vol. 2 (Esoldun)
1995 – Open The Gate – Greatest Hits Volume II (Heartbeat)
1995 – Joy in the Morning (Lagoon)
1996 – Hit After Hit (Rocky One)
1996 – The Very Best of Dennis Brown (Rhino)
1996 – Love & Hate: The Best of Dennis Brown (VP)
1996 – The Crown Prince (World Records)
1997 – Money in My Pocket (Delta Music)
1997 – Maximum Replay (Gone Clear)
1997 – Ras Portraits (RAS)
1997 – Reggae Max (Jet Star)
1998 – The Prime of Dennis Brown (Music Club)
1998 – Watch This Sound (Jamaican Gold)
1998 – Lovers Paradise (Time Music)
1998 – Tracks of Life (Snapper)
1999 – The Godlike Genius of Dennis Brown (Dressed to Kill)
1999 – Reggae Legends vol. 2 (Artists Only)
1999 – In the Mood (Charly)
1999 – Greatest Hits (Charly)
1999 – Love is So True (Prism)
1999 – Stone Cold World (VP)
1999 – Ready We Ready (Super Power)
1999 – Tribulation (PDG/Heartbeat)
1999 – The Great Mr Brown
2000 – May Your Food Basket Never Empty (RAS)
2000 – Reggae Trilogy (with Glen Washington & Gregory Isaacs) (J&D)
2000 – We are all One (J&D)
2000 – The Crown Prince (Metro)
2000 – Let Me be the One (VP)
2001 – Cassandra (Starburst)
2001 – Love's Got a Hold on You (Artists Only)
2001 – Money in My Pocket: Anthology (Trojan)
2001 – Any Day Now (Heartbeat)
2001 – Essential (Next Music)
2001 – Archives (Trojan)
2001 – The Prime of Dennis Brown (Music Club)
2002 – Dennis Brown In Dub (with Niney the Observer) (Rounder/Heartbeat)
2002 – You Satisfy My Soul (Fat Man)
2002 – Memorial: Featuring John Holt (Jetstar)
2002 – The Promised Land 1977–79 (Blood & Fire)
2002 – Winning Combinations (with Bunny Wailer) (Universal)
2002 – Memorial (Jetstar)
2002 – Forever Dennis (Jetstar/Reggae Road)
2003 – The Complete A&M Years (A&M)
2003 – Dennis Brown Sings Gregory Isaacs (RAS)
2003 – Crown Prince (Trojan)
2004 – Dennis Brown Conqueror: An Essential Collection (Burning Bush)
2005 – Money in My Pocket: The Definitive Collection (Trojan)
2005 – Sings Revival Classics (Cousins)
2005 – At the Foot of the Mountain (Charm)
2006 – Sledgehammer Special (with King Tubby)
2006 – Taxi 3 Trio (with Gregory Isaacs & Sugar Minott) (Taxi)
2008 – A Little Bit More: Joe Gibbs 12" Selection 1978–1983 (VP)
2010 – The Crown Prince Of Reggae: Singles (1972–1985) Reggae Anthology (#10 US Reggae)
2020 – Dennis (Burning Sounds) – vinyl reissue of an album originally released in 1983
Live albums
1979 – Live in Montreux (Laser/Joe Gibbs)
1987 – In Concert (Ayeola)
1992 – Live in Montego Bay (Sonic Sounds)
2000 – Academy (Orange Street)
2001 – Best of Reggae Live (Innerbeat)
2001 – Best of Reggae Live vol. 2 (Innerbeat)
2003 – Live in New York (Ital International)
DVD and Video
The Living Legend (VHS; Keeling Videos)
Rock Steady Roll Call (VHS; Ruff Neck)
Stars in the East (with John Holt) (VHS/DVD; Ruff Neck)
Inseparable volumes 1–4 (4 VHS volumes (199?)/2 DVD volumes (2004); Ruff Neck)
Live at Montreux (1996; DVD; Synergie)
Hits After Hits (2001; DVD; Keeling Videos)
Live at Reggae Ganfest (2003; DVD; Contreband)
Productions of other artists
1977 – Various Artists – Black Echoes
1978 – The DEB Music Players – Umoja
1978 – The DEB Music Players – 20th Century DEB-Wise
1979 – The DEB Music Players – DJ Tracking
1979 – Junior Delgado – Effort
1979 – Junior Delgado – Taste of the Young Heart
1981 – Junior Delgado – More She Love It
1982 – Junior Delgado – Bush Master Revolution
1985 – Various Artists – 4 Star Showcase
1996 – Various Artists – Return to Umoja
International hit singles
"Money In My Pocket" (1979) – UK No. 14
"Love Has Found Its Way" (1982) – UK No. 47, US R&B No. 42
"Senorita" (1988) – UK No. 95
Notes
References
Adebayo, Dotun (1999), "Dennis Brown: Child prodigy of Jamaican music and Bob Marley's chosen successor, he was brought low by drugs", The Guardian, 3 July 1999
Campbell, Howard (2009), "Remembering the Crown Prince", Jamaica Gleaner, 25 June 2009
Chang, Kevin O'Brien, & Chen, Wayne (1998), Reggae Routes: The Story of Jamaican Music, Ian Randle Publishers,
Cooke, Mel (2008), "Dennis Brown honoured in song", Jamaica Gleaner, 25 February 2008
Cooksey, Gloria, "Dennis Brown Biography" MusicianGuide.com, accessed 10 December 2007
Doran, D'Arcy (1999), "Toronto Fans Mourn Reggae Star's Death", Toronto Star, 6 July 1999
Evans, Tanio (2007), "Artistes pay tribute to Marley, Brown", Jamaica Gleaner, 12 February 2007
Foster, Chuck (1999), Roots Rock Reggae: an Oral History of Reggae Music from Ska to Dancehall, Billboard Books,
Greene, Jo-Ann, "Dennis Brown: Biography", Allmusic, accessed 22 November 2007
Jackson, Kevin (2005), "Catch the Riddim: George Nooks pays tribute to Dennis Brown", Jamaica Observer, 22 August 2005
Johnson, Christopher (2010), "Dennis Brown: The 'Crown Prince' Of Reggae", NPR, 26 April 2010
Kenner, Rob (2001), "Boom Shots", Vibe, April 2001, p. 171
Moskowitz, David V. (2006), Caribbean Popular Music: an Encyclopedia of Reggae, Mento, Ska, Rock Steady, and Dancehall, Greenwood Press,
Reel, Penny (2000), Deep Down with Dennis Brown, Drake Bros,
Roberts, David (2006), British Hit Singles & Albums, 19th edn., Guinness World Records Limited, London,
Salewicz, Chris (1999), "Obituary: Dennis Brown", The Independent, 3 July 1999
Simmonds, Jeremy (2008), The Encyclopedia of Dead Rock Stars: Heroin, Handguns, and Ham Sandwiches, Chicago Review Press,
Thompson, Dave (2002), Reggae & Caribbean Music, Backbeat Books,
Walker, Klive (2006), Dubwise: Reasoning from the Reggae Underground, Insomniac Press,
Walters, Basil (2005), "19 students receive Dennis Brown scholarships", Jamaica Observer, 18 February 2005
Walters, Basil (2008), "Dennis Brown 24-hour Internet radio coming on stream", Jamaica Observer, 1 February 2008
"Dennis Brown: a pioneer and cultural icon", Jamaica Gleaner, 2 July 1999
"VH1.com : Dennis Brown: Reggae Fans Honor Dennis Brown", VH1
External links
Discography at Roots Archives
Discography of 1970s recordings & dub sources at X Ray Music
Interview by Roger Steffens
Dennis Brown at Discogs
The Dennis Emanuel Brown Trust
45cat discography
1957 births
1999 deaths
Musicians from Kingston, Jamaica
Lovers rock musicians
Jamaican reggae singers
Deaths from pneumothorax
Trojan Records artists
Commanders of the Order of Distinction
20th-century Jamaican male singers
VP Records artists
Heartbeat Records artists
Greensleeves Records artists |
63421530 | https://en.wikipedia.org/wiki/Samsung%20Galaxy%20A41 | Samsung Galaxy A41 | The Samsung Galaxy A41 is a mid-range Android smartphone developed by Samsung Electronics as part of their 2020 A-series smartphone lineup. It was announced on 18 March 2020, and first released on 22 May 2020 as the successor to the Galaxy A40. The phone comes preinstalled with Android 10 and Samsung’s custom One UI 2.1 software overlay.
Specifications
Hardware
The Galaxy A41 is equipped with a MediaTek Helio P65 chipset, 64 GB of storage and 4 GB of RAM, as well as a dedicated microSD slot and a dual nano-SIM slot with support for VoLTE. Storage can be expanded up to 512 GB via a microSDXC card.
The phone has a 6.1-inch, FHD+ Super AMOLED display, with a screen-to-body ratio of 85.9% and an aspect ratio of 20:9 to match that of other Galaxy smartphones sold in 2020. An optical, under-display fingerprint reader replaces the rear-mounted one seen on the A40.
The new L-shaped rear camera system (similar to the ones seen on newer Samsung phones) uses three cameras: a 48 MP wide lens, an 8 MP ultrawide lens and a 5 MP depth sensor. A U-shaped screen cut-out houses the 25 MP sensor for the front-facing camera. Both camera systems are capable of recording 1080p video at 30 fps.
A 3500 mAh battery is used, with support for fast charging at up to 15 W.
Depending on the region, customers can choose from a range of colour options, such as Black, Haze Silver, Prism Blue and Aura Red.
Software
The phone comes with Android 10 and Samsung’s custom One UI 2.1 software overlay. Depending on the region, it can support contactless NFC payments through Samsung Pay and various other payment apps that can be installed separately.
The software experience is comparable to that of other 2020 Samsung devices, and it includes many of the software perks of costlier Samsung devices, such as Edge Screen and Edge Lighting.
As with most other Samsung phones released during 2020, Link to Windows, a feature of the Microsoft–Samsung partnership, comes as standard and can be accessed from the Android notification panel.
Based on Samsung’s software update schedule at the time, the phone was expected to be eligible for two major Android upgrades.
References
External links
Official website
Android (operating system) devices
Smartphones
Samsung Galaxy
Samsung mobile phones
Mobile phones introduced in 2020
Mobile phones with multiple rear cameras |
49474181 | https://en.wikipedia.org/wiki/Zephyr%20%28operating%20system%29 | Zephyr (operating system) | Zephyr is a small real-time operating system (RTOS) for connected, resource-constrained and embedded devices (with an emphasis on microcontrollers) supporting multiple architectures and released under the Apache License 2.0. Zephyr includes a kernel, and all components and libraries, device drivers, protocol stacks, file systems, and firmware updates, needed to develop full application software.
History
Zephyr originated from Virtuoso RTOS for digital signal processors (DSPs). In 2001, Wind River Systems acquired Belgian software company Eonic Systems, the developer of Virtuoso. In November 2015, Wind River Systems renamed the operating system Rocket and made it open-source and royalty-free. Compared to Wind River's other RTOS, VxWorks, Rocket had much smaller memory requirements, making it especially suitable for sensors and single-function embedded devices. Rocket could fit into as little as 4 KB of memory, while VxWorks needed 200 KB or more.
In February 2016, Rocket became a hosted collaborative project of the Linux Foundation under the name Zephyr. Wind River Systems contributed the Rocket kernel to Zephyr, but still provided Rocket to its clients, charging them for the cloud services. As a result, Rocket became "essentially the commercial version of Zephyr".
Since then, early members and supporters of Zephyr include Intel, NXP Semiconductors, Synopsys, Linaro, Texas Instruments, DeviceTone, Nordic Semiconductor, Oticon, and Bose.
Zephyr has had the largest number of contributors and commits compared to other RTOSes (including Mbed, RT-Thread, NuttX, and RIOT).
Features
Zephyr intends to provide all components needed to develop resource-constrained and embedded or microcontroller-based applications. This includes, but is not limited to:
A small kernel
A flexible configuration and build system for compile-time definition of required resources and modules
A set of protocol stacks (IPv4 and IPv6, Constrained Application Protocol (CoAP), LwM2M, MQTT, 802.15.4, Thread, Bluetooth Low Energy, CAN)
A virtual file system interface with several flash file systems for non-volatile storage (FATFS, LittleFS, NVS)
Management and device firmware update mechanisms
Configuration and build system
Zephyr uses Kconfig and devicetree as its configuration systems, inherited from the Linux kernel but implemented in the programming language Python for portability to non-Unix operating systems. The RTOS build system is based on CMake, which allows Zephyr applications to be built on Linux, macOS, and Microsoft Windows.
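As a rough illustration of this arrangement (a minimal sketch rather than an excerpt from the Zephyr documentation; exact header paths and entry-point conventions vary between Zephyr versions), a typical application pairs a short C source file with a prj.conf Kconfig fragment and a CMakeLists.txt that ties it into the build system, here shown as comments inside the same C block:

/* src/main.c - illustrative minimal Zephyr application */
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

int main(void)
{
    /* CONFIG_BOARD is a string produced by the Kconfig/board configuration */
    printk("Hello from %s\n", CONFIG_BOARD);
    return 0;
}

/* prj.conf - Kconfig options selected at build time, for example:
 *   CONFIG_PRINTK=y
 *
 * CMakeLists.txt - hooks the application into Zephyr's CMake build, for example:
 *   cmake_minimum_required(VERSION 3.20)
 *   find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
 *   project(hello_world)
 *   target_sources(app PRIVATE src/main.c)
 */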
Kernel
Early Zephyr kernels used a dual nanokernel plus microkernel design. In December 2016, with Zephyr 1.6, this changed to a monolithic kernel.
The kernel offers several features that distinguish it from other small OSes:
Single address space
Multiple scheduling algorithms
Highly configurable and modular for flexibility, with resources defined at compile time (a minimal sketch of this follows the list below)
Memory protection unit (MPU) based protection
Asymmetric multiprocessing (AMP, based on OpenAMP) and symmetric multiprocessing (SMP) support
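For instance, the compile-time definition of resources can be illustrated with the kernel's static thread-definition macro; the following is only a sketch, with the stack size and priority chosen arbitrarily for the example:

/* Illustrative sketch: a thread whose stack and control block are allocated
 * statically, so the resources are fixed when the firmware image is built. */
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

#define DEMO_STACK_SIZE 512  /* arbitrary example value */
#define DEMO_PRIORITY   5    /* arbitrary example value */

static void demo_thread(void *p1, void *p2, void *p3)
{
    while (1) {
        printk("tick\n");
        k_msleep(1000);  /* sleep for one second, yielding to the scheduler */
    }
}

/* K_THREAD_DEFINE(name, stack_size, entry, p1, p2, p3, priority, options, delay) */
K_THREAD_DEFINE(demo_tid, DEMO_STACK_SIZE, demo_thread, NULL, NULL, NULL,
                DEMO_PRIORITY, 0, 0);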
Security
A working group is dedicated to maintaining and improving Zephyr's security. In addition, because the project is owned and supported by a community, open-source developers around the world can vet the code, which is intended to improve its security.
See also
Embedded operating system
References
External links
ARM operating systems
Embedded operating systems
Free software operating systems
Linux Foundation projects
Real-time operating systems
Software using the Apache license |
8376591 | https://en.wikipedia.org/wiki/Michael%20Tomczyk | Michael Tomczyk | Michael S. Tomczyk is best known for his role in guiding the development and launch of the first microcomputer to sell one million units, as Product Manager of the Commodore VIC-20. His contributions are described in detail in his 1984 book, THE HOME COMPUTER WARS: An Insider's True Account of Commodore and Jack Tramiel. His role is also documented extensively in numerous interviews and articles. The VIC-20 was the first affordable, full-featured color computer and the first home computer to be sold in KMart and other mass market outlets. Michael joined Commodore in April 1980 as Assistant to the President (Commodore Founder Jack Tramiel who appointed him VIC-20 Product Manager). He has been called the "marketing father" of the home computer. Michael was also a pioneer in telecomputing, as co-designer of the Commodore VICModem, which he conceived and contracted while at Commodore. The VICModem was the first modem priced under $100 and the first modem to sell one million units.
Tomczyk is also an authority on nanotechnology. He is the author of NanoInnovation: What Every Manager Needs to Know (Wiley, 2016), and in 2016 he served on the NNI Review Committee (National Academy of Sciences), which reviewed the billion-dollar US National Nanotechnology Initiative to recommend changes and improvements to the initiative. He has also written book chapters and articles on the future of biosciences, gene therapy and medical innovations.
During his career, he has studied and developed best practices and strategies for managing radical/disruptive innovations, as a product manager/technology developer, senior business executive, consultant and academic program manager. For 18 years (1995-2014) he provided managerial leadership in the study of best practices and strategies for managing innovation at The Wharton School, University of Pennsylvania, where he served as Managing Director of the Emerging Technologies Management Research Program (1994-2001), the Mack Center for Technological Innovation (2001-2013) and the Mack Institute for Innovation Management (2013-2014). He retired from the University of Pennsylvania in 2014 and served as Innovator in Residence in the ICE Center at Villanova University (2014-2017), where he hosted an annual event called the Innovation Update Day.
Tomczyk continues to be an innovation leader. He is currently Senior Advisor to FAMA Financial Holdings, a FinTech venture focused on developing mobile money platforms and applications. In fall 2021 he became a founding director of Fintech Ecosystem Development Corporation, a developer of global mobile payment services and digital banking innovations.
He is co-moderator of the Commodore International Historical Society site on Facebook and is on the science advisory board at VIGAMUS in Rome.
Education
He holds an MBA from UCLA and a BA from the University of Wisconsin–Oshkosh, where he received a Distinguished Alumni Award. He earned a master's degree in environmental studies from the University of Pennsylvania in May 2010.
Military service
Michael Tomczyk served three years in the U.S. Army (1970-73 - highest rank Captain), working for military commands including the XVIII Airborne Corps (Ft. Bragg), 1st Signal Brigade (Vietnam) and USASTRATCOM/United Nations Command (Korea). As Public Information Officer at Fort Bragg, he helped launch the Volunteer Army (VOLAR) which was being piloted in 1970. He experienced combat and was awarded the Bronze Star for meritorious service in Vietnam (1971–72). He received the Army Commendation Medal for service in Korea (1973). He served in the Army Reserve after active duty.
Commodore Business Machines
In early 1980, Tomczyk joined Commodore as Marketing Strategist and Assistant to the President (Commodore founder Jack Tramiel). When Tramiel announced that he wanted to develop a low-cost, affordable home computer "for the masses, not the classes," Tomczyk embraced the concept and aggressively championed the new computer. The concept for the new computer was born at a Commodore management conference at the Fox and Hounds estate outside London, England, in the first week of April 1980. Despite his status as the newest member of the management team, Tomczyk vigorously championed the home computer, and on his return to Santa Clara, California, he wrote a 30-page single-spaced memo to Tramiel, detailing specifications, pricing, features and design innovations that he thought should be included. Tramiel was impressed and put Tomczyk in charge of guiding the development and marketing of the new computer.
Tomczyk named the new computer the "VIC-20" and set the price at $299.95. Tomczyk was given the additional title of "VIC Czar" (at a time when Washington had an "Energy Czar").
Tomczyk recruited a product management team he called the "VIC Commandos" and implemented a variety of innovations including a unique user manual, programming reference guide (which he co-authored), software on tape and cartridge, as well as a distinctive array of packaging, print ads and marketing materials. His motto for the VIC Commandos was "Benutzerfreundlichkeit" which means User Friendliness in German. The new computer was introduced at Seibu Department Store as the VIC-1001 in Tokyo in September 1980, and as the VIC-20 at the Consumer Electronics Show in 1981; and subsequently in Canada, Europe and Asia. The VIC-20 became the first microcomputer to sell one million units.
Star Trek star William Shatner was recruited to promote the new home computers. Tomczyk jokes that he was the first person to show Shatner how to use a real computer, as the technology on the Star Trek sets consisted simply of mock-ups.
In 1981 Tomczyk established the Commodore Information Network, an early implementation of a user community. He contracted the engineering for (and co-designed) the VICModem, which became the first modem priced under $100, and the first to sell one million units. To promote telecomputing, he negotiated free telecomputing services from CompuServe, The Source (online service) and Dow Jones. In 1982, the Commodore network was the largest traffic "site" on CompuServe. The Commodore Information Network has been called an early Internet style user community, before innovations in the graphic user interface brought the Internet to life.
The VIC-20 was followed by the more powerful Commodore 64. These computers introduced millions of people worldwide to home computing and telecomputing, and laid the foundation for ubiquitous worldwide computing. Tomczyk's experiences are described in his 1984 book, "The Home Computer Wars."
After Commodore
Tomczyk left Commodore in 1984, six months after Jack Tramiel left the company. He subsequently served as a consultant to technology startups and international trade projects. He was a contributing editor of Export Today Magazine for nearly 10 years, and authored more than 150 articles including computer magazine columns (Compute! and Compute!'s Gazette, 1980–85), a business newspaper column, book chapters and numerous magazine articles.
In December 2012, Tomczyk spoke at the launch of the Video Game Museum (ViGaMus) in Rome, Italy, which has an exhibit of vintage Commodore computers with stories and photos of Jack Tramiel and Tomczyk.
Tomczyk continues to pioneer in the field of radical innovation. In 2020 and 2021 he helped launch two fintech startups (FamaCash and Fintech Ecosystem Development Corp.).
Wharton
In 1995 he joined the Wharton School as Managing Director of the Emerging Technologies Management Research Program at the Wharton School, where he provided managerial leadership and worked with a core group of faculty to develop what has been called the world's leading academic center studying best practices and strategies for managing innovation. In 2001, the ET Program was expanded to the Mack Center for Technological Innovation, which in 2013 became the Mack Institute for Innovation Management, which is supported by a $50 million endowment. Michael left the Mack Institute in October 2013 and retired from the University of Pennsylvania in June 2014.
As Managing Director of the Mack Institute, Tomczyk served as a bridge between academia and industry partners. For more than 12 years he hosted an annual event he originated, called the Emerging Technologies Update Day, which showcased radical innovations looming on the near horizon that had the potential to transform industries and markets. In 2000 he helped launch the BioSciences Crossroads Initiative and in 2006 co-authored (with Paul J. H. Schoemaker) a major research report entitled: "The Future of BioSciences: Four Scenarios for 2020 and Their Implications for Human Healthcare" (May 2006). He has written articles about gene therapy, Internet applications, and many other technologies. Michael edited the Mack Institute's website and an electronic newsletter; taught sessions on radical innovation in the Wharton Executive Education Program and taught classes in Wharton's MBA program and at the UPENN School of Engineering. For almost a decade he served on the Commercialization Core committee developing translational medicine at the University of Pennsylvania Medical School.
While at Wharton, Tomczyk helped launch five successful technology startups, as an advisor and/or board member. During the 1990s he helped corporations develop and implement their Internet/ecommerce strategies. Throughout his career, he has advised numerous companies and government agencies on international technology projects and the impact of disruptive technologies. He has keynoted numerous industry events on emerging technologies and radical innovation. For several years, he served on the advisory group for the Advanced Computing department at Temple University.
In the early 2000s he developed an interest in nanotechnology, which he felt most business leaders did not yet understand. He served on the leadership committee for the IEEE/IEC initiative which developed standards for Nanotechnology and is a founding strategic advisor of the Nanotechnology Research Foundation. His interest in nanotechnology led him to write a book entitled "NanoInnovation: What Every Manager Needs to Know," published by Wiley in December 2014. As part of his research for the book, he interviewed more than 150 nanotechnology scientists, entrepreneurs and leaders in business and government.
In July 2014, Michael was appointed Innovator in Residence at Villanova University's ICE Center (Innovation, Creativity and Entrepreneurship). At Villanova, he hosts innovation events, teaches and advises students, and works with industry partners. In December 2014 he designed and co-hosted the first annual Villanova Innovation Update Day, a showcase for emerging technologies and applications that are changing industries and markets.
In June 2015, he was appointed to the 15-member Triennial Review Committee that reviewed and provided recommendations for the National Nanotechnology Initiative.
Publications
Tomczyk began his career as a journalist and has published more than 150 articles, including a monthly column (as contributing editor) for Export Today; a column on BASIC programming for Compute!'s Gazette (The VIC Magician); a business how-to column for the West Chester Daily News; and articles for the Associated Press, The New York Times, Stars and Stripes, and many other publications. In 2005 he co-edited (with Paul Schoemaker) a 134-page research report entitled The Future of BioSciences: Four Scenarios for 2020 and Their Implications for Human Healthcare. In 2011, he authored a chapter entitled "Applying the Marketing Mix (5 P's) to Bionanotechnology" in the book Biomedical Nanotechnology (Springer, 2011). His memoir (The Home Computer Wars, 1984) has become a collectible. His book, NanoInnovation: What Every Manager Needs to Know (Wiley, 2014), is the first in a series of books focusing on technological innovation. He contributed an essay to After Shock, a 2020 book commemorating the 50th anniversary of Alvin Toffler's classic Future Shock. In January 2021 he contributed a chapter to Digital Transformation in a Post-Covid World (2021, CRC Press) entitled "Domino Effect: How Pandemic Chain Reactions Disrupted Companies and Industries".
Notes and references
Tomczyk, Michael (2014). NanoInnovation: What Every Manager Needs to Know. Wiley, .
Villanova University, (2014). The ICE Center Welcomes Michael Tomczyk as Innovator in Residence.
Commodore Legends: Michael Tomczyk Part I (2011). MOS 6502 Blog Interview.
Tomczyk, Michael (2011). "Applying the Marketing Mix (5 P's) to Bionanotechnology". Biomedical Nanotechnology: Methods and Protocols, Sarah Hurst (ed.), .
Schoemaker, Paul J.H. and Tomczyk, Michael (2006). The Future of BioSciences: Four Scenarios for 2020 and Their Implications for Human Healthcare.
Tomczyk, Michael (1984). The Home Computer Wars: An Insider's Account of Commodore and Jack Tramiel. COMPUTE! Publications, Inc. .
Persidis, A. and Tomczyk, M. (1997), "Critical Issues in Commercialization of Gene Therapy", Nature Biotechnology, Vol. 15, pp. 689–690.
References
External links
Frost & Sullivan summary
1996 Interview
Commodore and Japan/2004 Interview
OldComputers.net Commodore VIC-20
Michael Tomczyk Commodore/Homepage
NanoInnovation: What Every Manager Needs to Know
Commodore people
Businesspeople in advertising
American businesspeople
American technology writers
American magazine editors
United States Army officers
Living people
Year of birth missing (living people) |
7324925 | https://en.wikipedia.org/wiki/Composite%20UI%20Application%20Block | Composite UI Application Block | The Composite UI Application Block (or CAB) is an addition to Microsoft's .NET Framework for creating complex user interfaces made of loosely coupled components. Developed by Microsoft's patterns & practices team, CAB is used exclusively for developing Windows Forms. A derivative version of CAB exists in both the Web Client and Mobile Client Software Factories as well. It encourages the developer to use either the Model-View-Controller or Model-View-Presenter architectural pattern, to encourage reuse of the individual User Controls (referred to in CAB as "SmartParts") by not coupling them with their underlying data elements or presentation code.
It is part of the foundation of the Smart Client Software Factory, another patterns & practices deliverable. It is also part of the Mobile Client Software Factory which is a version of the Smart Client Software Factory for use with the .NET Compact Framework 2.0.
See also
Software Factories
External links
CAB Home page
Smart Client Software Factory Home page
Microsoft software factories
Software architecture |
25666237 | https://en.wikipedia.org/wiki/Id%20Tech%206 | Id Tech 6 | id Tech 6 is a multiplatform game engine developed by id Software. It is the successor to id Tech 5 and was first used to create the 2016 video game Doom. Internally, the development team also used the codename id Tech 666 to refer to the engine. The PC version of the engine is based on Vulkan API and OpenGL API.
John Carmack started talking about his vision regarding the engine that would succeed id Tech 5 years before the latter debuted in Rage, but following his departure from id Software in 2014, Tiago Sousa was hired to replace him as the lead renderer programmer at the company.
On June 24, 2009, id Software was acquired by ZeniMax Media. It was later announced in 2010 that id Software's technology would be available only to other companies also belonging to ZeniMax Media.
Preliminary information
In 2008, while id Tech 5 had yet to be fully formed, John Carmack said the next engine by id Software would look towards a direction in which ray tracing and classic raster graphics would be mixed. The engine would work by raycasting the geometry represented by voxels (instead of triangles) stored in an octree. Carmack claimed that this format would also be a more efficient way to store both the 2D data and the 3D geometry data, because it would avoid packing and bordering issues. The goal of the engine would be to virtualize geometry the same way that id Tech 5 virtualized textures. This would be a change from past engines, which for the most part use mesh-based systems. However, he also explained during QuakeCon 08 that the hardware capable of running id Tech 6 did not yet exist at the time.
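id Software has not published an implementation of this idea, but the kind of sparse voxel octree Carmack described is conventionally built from nodes that either subdivide space into eight children or store surface data at a leaf. The following C sketch is purely illustrative; the structure layout, field names and lookup routine are assumptions, not engine code:

/* Illustrative sparse voxel octree node; not taken from any id Software source. */
#include <stdint.h>
#include <stddef.h>

typedef struct OctreeNode {
    struct OctreeNode *children[8]; /* NULL children represent empty space */
    uint8_t is_leaf;                /* leaves store voxel surface data directly */
    uint8_t rgb[3];                 /* example payload: a colour sampled from the surface */
} OctreeNode;

/* Descend to the leaf containing point (x, y, z) in a cube of half-size `half`
 * centred at (cx, cy, cz); a ray caster would repeat such lookups along each ray. */
static const OctreeNode *octree_lookup(const OctreeNode *node,
                                       float x, float y, float z,
                                       float cx, float cy, float cz, float half)
{
    if (node == NULL || node->is_leaf) {
        return node;
    }
    int idx = (x > cx) | ((y > cy) << 1) | ((z > cz) << 2);  /* select one of 8 octants */
    float q = half * 0.5f;
    return octree_lookup(node->children[idx], x, y, z,
                         cx + (x > cx ? q : -q),
                         cy + (y > cy ? q : -q),
                         cz + (z > cz ? q : -q),
                         q);
}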
In July 2011, Carmack explained that id Software was beginning research for the development of id Tech 6. It's unknown if Carmack's vision of the engine at the time was still the same he described in 2008.
Technology
An early version of the fourth main Doom game was being built on id Tech 5 but id Software restarted development in late 2011 to early 2012, after Bethesda expressed concerns about its creative and technological direction. When development was restarted it was decided to begin with the id Tech 5-based Rage codebase but take "big leaps back in certain areas of tech" and "[merge] Doom features to Rage".
Doom was first shown to the public during QuakeCon 2014, where it was confirmed it was running on an early version of id Tech 6. The developers' goals when creating the engine were described as being able to drive good-looking games running at 1080p and 60 fps, but also to reintroduce real-time dynamic lighting, which was largely removed from id Tech 5. The engine still uses virtual textures (dubbed "MegaTextures" in id Tech 4 and 5) but they are of higher quality and no longer restrict the appearance of real-time lighting and shadows. Physically based rendering has also been confirmed. A technical analysis of Doom found that the engine supports motion blur, bokeh depth of field, HDR bloom, shadow mapping, lightmaps, irradiance volumes, image-based lighting, FXAA, volumetric lighting/smoke, destructible environments, water physics, skin sub-surface scattering, SMAA and TSSAA anti-aliasing, directional occlusion, screen space reflections, normal maps, GPU-accelerated particles which are correctly lit and shadowed, triple-buffer v-sync which acts like fast sync, unified volumetric fog (affected by every light, shadow and indirect lighting source, including water caustics and underwater light scattering), a tessellated water surface (generated on the fly without GPU tessellation, with caustics dynamically generated and derived from the water surface), and chromatic aberration. On July 11, 2016, id Software released an update for the game that added support for Vulkan.
Following Carmack's departure from id Software, Tiago Sousa, who had worked as the lead R&D graphics engineer of several versions of the CryEngine at Crytek, was hired to lead development of the rendering. Bethesda's Pete Hines has commented that while id Tech 6 reuses code written by Carmack, most of the decisions made about the engine's direction were taken after he left.
Games using id Tech 6
Doom (2016) – by id Software
Dishonored 2 (2016) – Arkane Studios
Dishonored: Death of the Outsider (2017) – Arkane Studios
Wolfenstein II: The New Colossus (2017) – by MachineGames
Doom VFR (2017) – by id Software
Wolfenstein: Youngblood (2019) – by MachineGames and Arkane Studios
Wolfenstein: Cyberpilot (2019) – by MachineGames and Arkane Studios
See also
First-person shooter engine
id Tech 5
id Tech 7
List of game engines
List of first-person shooter engines
References
2016 software
3D graphics software
Game engines that support Vulkan (API)
Global illumination software
Id Tech
Video game engines |
40445182 | https://en.wikipedia.org/wiki/Graduate%20University%20of%20Advanced%20Technology | Graduate University of Advanced Technology | The Graduate University of Advanced Technology (GUAT) is an advanced research center and graduate-level degree-granting institution in Kerman, Iran. It was founded in 2007. It is also known as Kerman Graduate University of Technology (KGUT). The main campus is located in Mahan, Kerman.
GUAT is considered to be one of the most productive research centers of the country. It offers advanced degrees (master's and Ph.D.) in Electrical, Civil, Mechanical, Chemical, Material, Mineral and Photonic Engineering.
Faculties and Colleges
Colleges bring together academics and students from a broad range of disciplines, and academics from many different colleges can be found within each faculty or department of the university.
Faculty of Electrical and Computer Engineering
Department of Telecommunications and Electronics Engineering
Department of Power and Control Engineering
Department of Computer Engineering and Information Technology
Faculty of Mechanical and Materials Engineering
Department of Energy Conversion Engineering
Department of Design and Manufacturing Engineering
Department of Material Engineering
Faculty of Chemistry and Chemical Engineering
Department of Chemistry
Department of Chemical Engineering
Department of Polymer Engineering
Faculty of Civil Engineering and Geodesy
Department of Surveying Engineering
Department of Earthquake and Geotechnical Engineering
Department of Water Engineering
Faculty of Sciences & Modern Technologies
Department of Photonics
Department of Earth Science
Department of Nanotechnology
Department of Applied Mathematics
Department of Biomedical Engineering
Department of Nuclear Engineering
The university also includes several research institutes.
Environmental Sciences Research Center
The Environmental Sciences Research Center (ESRC) was established as a "fundamental research" division of the ICST in 2001. ESRC has four research departments: Biotechnology, Ecology, Biodiversity and Environment. Its aim is to conduct research in different fields of environmental science with the objective of reducing the adverse effects of technological development on the environment, and to develop methods that help to utilize natural resources more appropriately for sustainable development. The Environmental Sciences Research Institute has well-equipped laboratories and collaborates with other research centers and universities. The Fourth National Biotechnology Congress in 2005, the Second National Cell and Molecular Biology Congress in 2008 and the First National Meeting on Herpetology in 2009 were held by the Environmental Sciences Research Institute. In addition, several workshops on stem cell culture, PCR, real-time PCR and research methods in biological sciences have also been held by ESRC.
Photonics Research Center
The Photonics Research Center (PRC) comprises four research groups, namely laser, optical fiber, semiconductors, and nano-photonics. With a set of pre-determined goals, PRC started its activities in 2001. At present, PRC is engaged in education and research activities with its full-time, visiting, and part-time faculty members.
Materials Research Center
The Materials Research Center (MRC) received its license and authority from the Ministry of Science, Research and Technology in 2002. MRC was established in the form of three research groups (metals, ceramics, and new materials), with the goal of providing infrastructure for research projects and the training of materials specialists. MRC is committed to scientific dissemination and communication with other research centers, in accordance with the Center's by-laws.
Energy Research Center
Energy provision is one of humanity's oldest problems and is presently reaching a crisis point. There are numerous challenges in this regard, and most industrialized countries have engaged with the issue using all their economic and technological might. In order to create the infrastructure needed to conduct research projects and train specialists, the Energy Research Center (ERC) has, since 2004, divided its activities into three areas: (1) energy optimization and management, (2) renewable energy, and (3) fuel cells, hydrogen, and energy conversion. ERC received an agreement in principle for its activities from Iran's Ministry of Science, Research and Technology in August 2005.
Information Technology Research Center
The activities of this group are directed towards the Center's goals for the development of science and technology in the field of Information Technology (IT). These activities are divided into two areas, research and higher education, with priority given to research along the following lines: carrying out fundamental, applied, and developmental research related to IT, and obtaining advanced technologies in this field; offering educational workshops, scientific seminars, and research courses; equipping and maintaining research laboratories related to IT; and providing consultation services to research centers, universities and industrial centers.
The Kerman Science and Technology Park is located at the university [2].
Address: Knowledge Paradise, End of Haft Bagh-e-Alavi Highway, Kerman, Iran.
Telephone: +98 34 3377 6610,+98 34 3377 6611,+98 34 3377 6612,+98 34 3377 6613.
References
External links
[1] Kerman Graduate University of Technology
[2] Kerman Science and Technology Park
Education in Kerman Province
Educational institutions established in 2007
Universities in Iran
Buildings and structures in Kerman Province
2007 establishments in Iran |
67371635 | https://en.wikipedia.org/wiki/Luv-Kush%20equation | Luv-Kush equation | The Luv-Kush equation is a political term used in the context of the politics of Bihar, to denote the alliance of the agricultural Kurmi and the Koeri caste, which together constitutes approximately 15% of the state's population. The alliance of these two caste groups has remained the support base of Nitish Kumar, as against the MY equation of Lalu Prasad Yadav, which constitutes Muslims and the Yadavs. Caste consciousness and the quest for political representation largely drive the politics of Bihar. The political alliance of the Koeri and the Kurmi castes, termed the "Luv-Kush equation" was formed when a massive Kurmi Chetna Rally (Kurmi consciousness rally) was organised by members of the Kurmi community in 1994 against the alleged casteist politics of Lalu Yadav, who was blamed by contemporary community leaders for promoting Yadavs in politics and administration.
Unlike the Yadavs, the Kushwahas (Koeris), who are traditionally a farming community, are very similar to the Kurmis, with whom they have a closer social alliance than with the Yadavs. Since the 1990s, Nitish Kumar has garnered the support of a number of Kushwaha leaders including Shakuni Choudhury, Nagmani and Upendra Kushwaha. The call for unity between the two castes by Kumar in the 1990s gave rise to the new social alliance and a new term in the political lexicon of Bihar.
Etymology
The Backward Castes in Bihar are divided into two Annexures. While the Annexure-I castes are socio-economically more backward and are also called Extremely Backward Castes, the castes included in Annexure-II are comparatively prosperous. Annexure-I contains approximately 113 caste groups, while there are only four caste groups in Annexure-II: Koeri, Kurmi, Vaishya and the Yadav. The Yadav make up the single largest caste group in Bihar, followed by Koeris, who comprise 8% of the state's population. The Kurmi make up four per cent of the state's population and are concentrated around Nalanda, Patna and a few pockets of central Bihar. The Koeris, who are more numerous than the Kurmis, are distributed more heterogeneously across Munger, Banka, Khagaria, Samastipur, East Champaran, West Champaran and Bhojpur district.
The Koeris were basically vegetable-growing farmers, unlike the Kurmis, who grow staple crops and food grains. The Koeris claim descent from the mythological Hindu deity Kusha, a son of Rama, an incarnation of lord Vishnu. The Kurmis claim descent from Kusha's twin brother, Lava. Pseudo-historical facts were used to forge an alliance between the two caste groups in 1993, when Nitish Kumar, earlier an ally of Lalu Prasad Yadav, caused a split in his political party to form his own Samata Party.
In the 1930s, the three numerically important caste groups of Bihar (the Yadav, the Kurmi and the Koeri) formed a political party called the Triveni Sangh to challenge the over-representation of the Forward Castes in politics. From 1990, the political scenario in the state changed, leading to the fall of the upper castes from power. They were now replaced by upwardly mobile backward caste groups such as these three agricultural castes. This led to the formation of a Yadav-dominated state government in the 1990s under Lalu Prasad Yadav. It had the support of the other two caste groups, which together constituted the Triveni Sangh. According to political theorists of Bihar, the Yadavs, though numerically superior, fell behind the Kushwahas (Koeris) and the Kurmis in terms of education and in other spheres of life. This caused dissension between them, and the latter refused to accept the leadership of the Yadavs. Nitish Kumar is said to have utilised this dissension in the early 1990s to break the hold of Lalu Prasad Yadav over a section of the Backward Castes.
Despite having a cultivator background, the twin castes of the Koeri and the Kurmi have many differences. The Kurmis had manned key bureaucratic positions in the 1960s and 70s and have remained far ahead of other backward castes, both socio-economically and educationally. The Koeris have remained comparatively backward, much like the Yadavs. The socio-economic ascendancy of the Kurmis led them to join the ranks of landlords. Consequently, they were involved in the formation of a private army called the Bhumi Sena, which was known for perpetrating massacres of the Dalits and other atrocities. In contrast, the Koeris have always remained at the forefront in the battle of weaker sections against the landlords. Jagdeo Prasad was a well-known Koeri leader, noted for championing the cause of the lower strata of society.
Alliance
1990-2000
After Lalu Yadav assumed the premiership of Bihar as a leader of the Janata Dal in 1990, he took several bold steps, which were welcomed by the downtrodden communities. There was strong support for Yadav's government from the Other Backward Class (OBC) Yadav community, to which he belonged, and from Muslims, who saw in him a saviour after the arrest of Lal Krishna Advani, the Bharatiya Janata Party leader, in Samastipur. Advani had been undertaking a massive Ram Rath Yatra, which was polarising the state along religious lines. Yadav also revised the Karpoori Formula, a scheme drawn up by former chief minister Karpoori Thakur under which three per cent of government jobs and places in educational institutions were reserved for members of the Forward Castes and a separate three per cent for women; the women's places went to upper castes if suitable women candidates could not be found. Yadav abolished the upper-caste quota and reduced the women's quota to two per cent. The four per cent freed up by this rearrangement was distributed equally between the Extremely Backward Classes and the Upper Backward Castes.
During this period, Yadav's charismatic personality led him to believe he was the sole leader in the Janata Dal. To an extent this was true, given his hold over the poor and rural people of Bihar, the lower castes and the minorities. Yadav sidelined other leaders, and the party witnessed a period of dominance by his fellow caste men. The dominance of Yadav's people, from the cadres to the higher party positions, at the cost of other aspirational backward castes created dissension in their ranks. The growing face-offs led to a split in the Janata Dal in 1994, when Nitish Kumar and George Fernandes formed the Samata Party, which was supported by other leaders of the Koeri and Kurmi castes. In the 1995 elections to the Bihar Legislative Assembly, there were two rival factions, one dominated by the Yadavs under the leadership of Lalu Prasad, and the other dominated by the Koeri-Kurmi community. In the 1996 general election to the Lok Sabha, the Samata Party formed an alliance with the Bharatiya Janata Party, which was popular among the upper-caste and urban population of Bihar. The Samata Party's performance in the 1995 elections for the Bihar Assembly was poor: it won only seven seats, while the Bharatiya Janata Party emerged as the main opposition party against the Janata Dal with 41 seats. Lalu Prasad emerged victorious, with the Janata Dal winning 167 seats. The only impact of the Nitish Kumar factor was the loss of some votes from the Koeri-Kurmi community, while the lower castes supported the Janata Dal firmly.
After the 1995 elections, the upper-caste alliance with the Koeri-Kurmi community was strengthened around the common aim of challenging the dominance of the Yadav caste and Lalu Prasad, who had allegedly made insulting comments against them in the 1995 election campaign. The Kurmi Chetna Rally, which led to the cultivation of an alliance between the upwardly mobile Koeri and Kurmi castes, was preceded by other caste-based rallies, organised by caste groups such as the Nishads and the Dhanuks to flaunt the might of their respective castes. Senior leaders of various political parties often addressed these rallies to garner support from the castes.
Background to the Kurmi Chetna Rally (1994)
The Kurmi Chetna Rally (transl: Kurmi consciousness rally) was a historic moment in the politics of Bihar, which led to the formation of new caste coalitions and the erosion of Yadav caste dominance under Lalu Yadav. The rally was the brainchild of a number of lesser-known community leaders, and its original aim was non-political: it was seen by the community leaders as a forum to raise political consciousness among community members under the banner of the "All India Kurmi Kshatriya Mahasabha". Some leaders of the Kurmi community, like Satish Kumar, who organised the event, found it difficult to get the consent of Nitish Kumar, who was reluctant to attend the rally; he was concerned that public support for it was unclear and that it could be a failure. Some leaders who were among the coordinators of the event also revealed later that the Bhumihars, a caste which opposed Lalu Prasad during his political ascendency, were supporting the rally implicitly. The organisers used posters of national-level leaders like Uma Bharti and Sharad Pawar, but the day before the main event, Nitish Kumar issued a statement against it and refused to attend.
Kumar had been critical of Yadav's policies, some of which gave influential Yadavs high-value government contracts and other opportunities which came with the administrative apparatus. Relatives of Lalu Prasad, including his brothers-in-law Sadhu Yadav and Subhash Prasad Yadav, took the lead in high-value business with the support of the government. In 1993, at a memorial ceremony for Karpoori Thakur, Nitish warned Yadav against these policies, which undermined the interests of non-Yadav OBCs, but no solution was found. Sankarshan Thakur, who wrote a biography of Nitish Kumar, has written about this period.
According to Sankarshan Thakur, there was an explicit indication from Lalu Prasad that any step taken by Kumar to support or join the rally would result in his expulsion from the party, which Kumar understood. Hence he hesitated, even after being invited by other community leaders who had gathered at the Gandhi Maidan. It had earlier been thought that the public, and even the Kurmi community, were paying little attention to the rally. This proved to be wrong: on the day of the rally, large numbers of Kurmi caste men gathered at the Gandhi Maidan. Still hesitating over whether to attend, Kumar stopped at the residence of Vijay Krishna, a leader and erstwhile ally of Lalu Prasad Yadav, who had resigned from his ministry after ideological differences with Yadav.
Krishna realised that the only way to pose a political challenge to Yadav was by harnessing the votes of the disgruntled Kurmi community. After a long discussion, Krishna obtained Kumar's consent, and the decision was made to speak directly against the Janata Dal and Lalu Prasad Yadav. After defending Yadav for some time, Kumar spoke about the over-representation of some castes at the cost of others and voiced his anti-government feelings to the crowd, asking it to reject a government that was not conscious of the rights of its own people. The leaders gathered at the rally also appealed for an alliance of the Kurmi and Koeri castes, proposing a roti-beti relationship (the sharing of food and intermarriage) between the two to consolidate the ties.
Electoral performance of the Samata Party (1994-2005)
The Samata Party, which came into existence after the Kurmi Chetna Rally, was the new rival front and principal opponent of the Lalu Yadav-led Janata Dal. Members of the Koeri and Kurmi castes dominated it, and it came to be known as the "party of Koeri-Kurmi".
The 1995 elections in Bihar witnessed a complete marginalisation of the forward castes from Bihar's political scene. The two chief rival fronts contesting the elections were the Janata Dal with its allies, led by Yadavs, and the Samata Party, dominated by leaders of the Koeri and Kurmi communities under Lalu's former partner, Nitish Kumar. The pre-election scenario was one of confusion over alliances and partnerships, with the Indian National Congress (INC) and the Bharatiya Janata Party (BJP) contesting individually, while the Samata Party was aligned only with the Communist Party of India (Marxist–Leninist) (CPI (ML)). The Janata Dal under Lalu was aligned with its traditional partners such as the Jharkhand Mukti Morcha, the CPI and the CPM.
Beginning of the post-Mandal era (2005 elections)
In the 2005 assembly elections in Bihar, the National Democratic Alliance (NDA) obtained a majority, winning 144 of 243 seats. Nitish Kumar, who had ambitions for the chief ministerial post, was chosen as the new chief minister by the NDA leadership. After becoming leader, Nitish gave adequate representation to the Extremely Backward Castes (EBC) by including them in his ministry: in the Lalu-Rabri era the representation of the EBCs in government was only 2.1%, a share which climbed to 15% of the cabinet during this period. The 2005 elections also witnessed the highest-ever representation among winning candidates of the Koeri and Kurmi castes, who constituted the core of the Janata Dal (United) electorate, and the twin castes benefited most from the victory of the JD(U)-led NDA. The JD(U) had been formed by the merger of the Samata Party with the Sharad Yadav faction of the Janata Dal, and had allied with the BJP to displace the Rashtriya Janata Dal (RJD), the party of Lalu Prasad which had broken away from the Janata Dal to emerge as a separate entity. The coming to power of the National Democratic Alliance government in Bihar did not challenge the monopoly of the OBCs in the politics of Bihar; Christophe Jaffrelot points to a further strengthening of OBCs at the forefront of Bihari politics.
Alliance after 2010
In later years, the politics of the state remained caste-driven, but with the emergence of multi-polar fronts. The new parties formed in this later period commanded the loyalty of specific caste groups represented by their senior leaders. Ram Vilas Paswan emerged as the leader of the Dalits with his Lok Janshakti Party. The upper castes, who by now had become the core supporters of the BJP, cultivated an alliance with the Koeri-Kurmi community, which had remained the core support base of the JD(U) against the Rashtriya Janata Dal (RJD) and constituted the bedrock of the ruling NDA.
By 2010, Nitish had cultivated an alliance with many Koeri leaders. For instance, Upendra Kushwaha, the newly elected Member of the Legislative Assembly from the Jandaha constituency, was promoted against all odds. In 2004, when the Samata Party merged with the JD(U) to make it the largest opposition party in Bihar, Kushwaha was made leader of the opposition. However, the ambitious Kushwaha caused a split in the party and formed his own Rashtriya Samata Party in 2007. The new party's poor electoral performance led Kushwaha to merge it back into the parent JD(U) in 2009, and the disgruntled Kushwaha was then sent to the Rajya Sabha by the JD(U) to win back his confidence. Although Kushwaha had been an old partner of Nitish, associated with him since the days of Jayaprakash Narayan before making his electoral debut in 2000, he caused a split in the JD(U) again in 2013 and floated his Rashtriya Lok Samata Party.
Considering Upendra Kushwaha's hold on the Koeri caste, the Bharatiya Janata Party allied with him for the 2014 general election to the Lok Sabha. The election proved to be an eye-opener for the other parties in the state: with the help of its new allies, the BJP succeeded in forming a new voter base. The Kurmi community, whose support the JD(U) enjoys, makes up about 2.5% of the state's population, while the Koeris account for a more significant 7%. The failure of Nitish Kumar to ally with other important castes, along with the expulsion of Kushwaha, contributed to the massive BJP victory. The Rashtriya Lok Samata Party won the three seats it contested.
There was a split in the National Democratic Alliance in Bihar during this period. The JD(U) contested the 2015 Bihar Legislative Assembly elections along with its long-time rival, the RJD of Lalu Prasad. The cause of this fissure was the poor performance of the JD(U) in the 2014 Lok Sabha elections and the ambitions of Nitish Kumar, who had wanted to be the NDA's candidate for prime minister but was dropped in favour of the more popular Narendra Modi. Nitish parted ways with the NDA to join his long-time rival, the RJD, which had been out of power in Bihar for more than a decade. The alliance, called the Mahagathbandhan, was formed with the Indian National Congress also a member, while the BJP had to content itself with the RLSP and the Lok Janshakti Party as allies. This social engineering, as it was called, to rope in various castes and communities led to the victory of the Mahagathbandhan, and the BJP and its allies fared badly in the polls. The "caste divide" was considered the reason behind the results: as the BJP is seen as the party of the upper castes, it was natural for it to lose ground in a state dominated by the backward castes. The Rashtriya Lok Samata Party of Upendra Kushwaha, which claimed to represent the Koeri caste, was unable to break the Luv-Kush equation (Nitish Kumar's hold on the votes of the two caste groups) and won only two seats.
The Mahagathbandhan's victory led to the formation of an RJD-JD(U) government in the state for the first time in 2015. One noticeable impact of the RJD-JD(U) alliance was a massive increase in the number of Koeri, Kurmi and Yadav legislators at the cost of the upper castes, whose share of the assembly fell to its lowest ever. The Upendra Kushwaha lobby could not secure the Koeri votes for the BJP, and the BJP later recognised that Nitish still commanded the support of both communities. Even after taking populist steps such as celebrating Samrat Ashoka Jayanti under the leadership of the BJP's Koeri leaders, and readily supporting the pseudo-historical claims of the Koeri caste to Mauryan lineage, the community voted for the Mahagathbandhan. Recognising the importance of Kumar, the BJP brought him back into the NDA's fold after his conflict with the RJD during the months of the coalition government. The government fell after the JD(U) left the Mahagathbandhan, and an NDA government of the JD(U) and BJP was formed again in the state in 2017. Upendra Kushwaha, who had ambitions for the post of chief minister, found that it would be difficult to obtain after the return of Nitish Kumar to the NDA. The run-up to the 2019 general election to the Lok Sabha saw a hue and cry in the national media over Kushwaha's anti-coalition moves: he met Lalu Yadav, who was undergoing treatment in hospital, and the media reported several skirmishes in the NDA over the issue of seat sharing. The BJP was accused by its smaller allies, the Rashtriya Lok Samata Party and the Lok Janshakti Party, of not giving them due importance in the distribution of seats.
Amidst these insecurities, Kushwaha left the NDA. The decision came after his controversial "Kheer remark", through which he sought an alliance of the Koeri and Yadav castes in place of the natural alliance of the Kurmis and Koeris. Since its formation in 2013, the Rashtriya Lok Samata Party had attracted several big Koeri leaders, including Shri Bhagwan Singh Kushwaha and Nagmani, becoming a party dominated by the Koeris. However, the BJP and JD(U), with their social engineering, were able to disrupt the social coalition of the new Mahagathbandhan (Grand Alliance), which included the Rashtriya Janata Dal and the Rashtriya Lok Samata Party, among others.
Alliance after 2019
After the poor performance of the Mahagathbandhan in the 2019 general election to the Lok Sabha, Kushwaha got an opportunity to leave the Grand Alliance. The pretext was the alliance's projection of Tejashwi Yadav as its chief ministerial candidate for Bihar, which Kushwaha repudiated because he wanted another nominee for the post. After internal discussions, he decided to join neither the Mahagathbandhan nor the National Democratic Alliance, but to contest the 2020 Bihar Legislative Assembly elections alone, in an alliance with a few minor players such as the All India Majlis-e-Ittehadul Muslimeen, considered to have the support of radical Muslims, and the Bahujan Samaj Party, a significant player in Uttar Pradesh. The new alliance, called the Grand Democratic Secular Front, eyed the votes of the Kushwahas, Muslims and Dalits. It chose Upendra Kushwaha as its chief ministerial candidate and set out to put up firm resistance to both the NDA and Mahagathbandhan blocs.
The poll strategy of the RLSP, which led this bloc, was to collect the votes of the Koeri or Kushwaha community. Intending to rope in the second-largest community of Bihar, the RLSP gave 40% of its share of seats to candidates from this caste group. The Janata Dal (United), which relies upon the same social coalition, took steps to consolidate its "Luv-Kush equation" by giving a significant number of tickets to the Koeri and Kurmi castes. The poll results went against the expectations of the RLSP and Upendra Kushwaha: it performed badly, winning no seats, but its hold over the Koeri caste hurt the JD(U) in over a dozen constituencies. It scored up to 30,000 votes in some of the Kushwaha-dominated seats of Bihar and significantly reduced the JD(U)'s clout in the assembly elections. The JD(U) thus became the junior partner of the BJP in the NDA, its seats in the assembly falling from 75 in the 2015 Bihar Assembly elections to 43 in the 2020 elections. Questions were raised in the NDA bloc over the leadership and efficacy of Nitish Kumar, but despite all odds the NDA chose him to be the chief minister of Bihar again. The impact of the JD(U)-RLSP contest was also seen in the massive reduction in the number of Kushwaha and Kurmi legislators in the newly elected assembly.
Kumar and Kushwaha later realised that an alliance of the two parties could be beneficial, and proposals were sent to RLSP leaders to merge with the JD(U). The RLSP was initially inclined to become a part of the NDA but was not in favour of a merger; after meetings with the JD(U) leadership, however, it was formally merged into the JD(U). Upendra Kushwaha was given the post of president of the JD(U)'s parliamentary board and was nominated by the party for a seat in the Legislative Council. Before the merger, a faction of the RLSP under one of its leaders, Virendra Kushwaha, had merged with the Rashtriya Janata Dal. After absorbing the RLSP, Nitish set out to consolidate his old social coalition of the Koeri and Kurmi castes: Ramchandra Prasad Singh, who belonged to the Awadhiya Kurmi community, was made national president of the party, while Umesh Singh Kushwaha, a young leader from Mahnar, was made the party's Bihar state president.
References
External links
Upendra Kushwaha Ups Political Stock with 'Luv-Kush' Social Engineering, Miffed Nitish Looks to BJP |
69122776 | https://en.wikipedia.org/wiki/Vector%20overlay | Vector overlay | Vector overlay is an operation (or class of operations) in a geographic information system (GIS) for integrating two or more vector spatial data sets. Terms such as polygon overlay, map overlay, and topological overlay are often used synonymously, although they are not identical in the range of operations they include. Overlay has been one of the core elements of spatial analysis in GIS since its early development. Some overlay operations, especially Intersect and Union, are implemented in all GIS software and are used in a wide variety of analytical applications, while others are less common.
Overlay is based on the fundamental principle of geography known as areal integration, in which different topics (say, climate, topography, and agriculture) can be directly compared based on a common location. It is also based on the mathematics of set theory and point-set topology.
The basic approach of a vector overlay operation is to take in two or more layers composed of vector shapes, and output a layer consisting of new shapes created from the topological relationships discovered between the input shapes. A range of specific operators allows for different types of input, and different choices in what to include in the output.
History
Prior to the advent of GIS, the overlay principle had developed as a method of literally superimposing different thematic maps (typically an isarithmic map or a chorochromatic map) drawn on transparent film (e.g., cellulose acetate) to see the interactions and find locations with specific combinations of characteristics. The technique was largely developed by landscape architects. Warren Manning appears to have used this approach to compare aspects of Billerica, Massachusetts, although his published accounts only reproduce the maps without explaining the technique. Jacqueline Tyrwhitt published instructions for the technique in an English textbook in 1950, including:
Ian McHarg was perhaps most responsible for widely publicizing this approach to planning in Design with Nature (1969), in which he gave several examples of projects on which he had consulted, such as transportation planning and land conservation.
The first true GIS, the Canada Geographic Information System (CGIS), developed during the 1960s and completed in 1971, was based on a rudimentary vector data model, and one of the earliest functions was polygon overlay. Another early vector GIS, the Polygon Information Overlay System (PIOS), developed by ESRI for San Diego County, California in 1971, also supported polygon overlay. It used the point-in-polygon algorithm to find intersections quickly. Unfortunately, the results of overlay in these early systems were often prone to error.
In 1975, Thomas Peucker and Nicholas Chrisman of the Harvard Laboratory for Computer Graphics and Spatial Analysis introduced the POLYVRT data model, one of the first to explicitly represent topological relationships and attributes in vector data. They envisioned a system that could handle multiple "polygon networks" (layers) that overlapped by computing Least Common Geographic Units (LCGU), the area where a pair of polygons overlapped, with attributes inherited from the original polygons. Chrisman and James Dougenik implemented this strategy in the WHIRLPOOL program, released in 1979 as part of the Odyssey project to develop a general-purpose GIS. This system implemented several improvements over the earlier approaches in CGIS and PIOS, and its algorithm became part of the core of GIS software for decades to come.
Algorithm
The goal of all overlay operations is to take in vector layers, and create a layer that integrates both the geometry and the attributes of the inputs. Usually, both inputs are polygon layers, but lines and points are allowed in many operations, with simpler processing.
Since the original implementation, the basic strategy of the polygon overlay algorithm has remained the same, although the vector data structures that are used have evolved.
Given the two input polygon layers, extract the boundary lines.
Cracking part A: In each layer, identify edges shared between polygons. Break each line at the junction of shared edges and remove duplicates to create a set of topologically planar connected lines. In early topological data structures such as POLYVRT and the ARC/INFO coverage, the data was natively stored this way, so this step was unnecessary.
Cracking part B: Find any intersections between lines from the two inputs. At each intersection, split both lines. Then merge the two line layers into a single set of topologically planar connected lines.
Assembling part A: Find each minimal closed ring of lines, and use it to create a polygon. Each of these will be a least common geographic unit (LCGU), with at most one "parent" polygon from each of the two inputs.
Assembling part B: Create an attribute table that includes the columns from both inputs. For each LCGU, determine its parent polygon from each input layer, and copy that parent's attributes into the LCGU's row in the new table; if the LCGU was not inside any polygon of one of the input layers, leave those values as null.
Parameters are usually available to allow the user to calibrate the algorithm for a particular situation. One of the earliest was the snapping or fuzzy tolerance, a threshold distance. Any pair of lines that stays within this distance of each other is collapsed into a single line, avoiding unwanted narrow sliver polygons that can occur when lines that should be coincident (for example, a river and a boundary that should follow it de jure) are digitized separately with slightly different vertices.
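The cracking-and-assembling strategy above can be prototyped in a few lines with a general-purpose geometry library. The sketch below uses Python with the Shapely package, which is an assumed tool choice rather than anything prescribed by the systems described here; the toy layers, attribute names and use of representative points are illustrative, and the snapping tolerance just discussed is ignored.

```python
# A minimal sketch of the cracking-and-assembling overlay strategy using
# Shapely (an assumed library; real GIS packages add tolerances, indexing
# and topology repair on top of this basic pattern).
from shapely.geometry import Polygon
from shapely.ops import unary_union, polygonize

# Two toy input "layers", each a list of (polygon, attributes) pairs.
layer_a = [(Polygon([(0, 0), (4, 0), (4, 4), (0, 4)]), {"zone": "A1"})]
layer_b = [(Polygon([(2, 2), (6, 2), (6, 6), (2, 6)]), {"soil": "clay"})]

# Cracking: pool every boundary line and node it at all intersections.
boundaries = [poly.boundary for poly, _ in layer_a + layer_b]
noded_lines = unary_union(boundaries)

# Assembling part A: each minimal closed ring becomes a candidate LCGU.
lcgus = list(polygonize(noded_lines))

# Assembling part B: inherit attributes from whichever parents contain each LCGU.
results = []
for piece in lcgus:
    probe = piece.representative_point()      # a point guaranteed to lie inside the piece
    attrs = {}
    for poly, data in layer_a + layer_b:
        if poly.contains(probe):
            attrs.update(data)                # a missing parent simply leaves its columns unset
    results.append((piece, attrs))

for piece, attrs in results:
    print(round(piece.area, 2), attrs)
```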
Operators
The basic algorithm can be modified in a number of ways to return different forms of integration between the two input layers. These different overlay operators are used to answer a variety of questions, although some are far more commonly implemented and used than others. The most common are closely analogous to operators in set theory and boolean logic, and have adopted their terms. As in these algebraic systems, the overlay operators may be commutative (giving the same result regardless of order) and/or associative (more than two inputs giving the same result regardless of the order in which they are paired).
Intersect (ArcGIS, QGIS, Manifold, TNTmips; AND in GRASS): The result includes only the LCGUs where the two input layers intersect (overlap); that is, those with both "parents." This is identical to the set theoretic intersection of the input layers. Intersect is probably the most commonly used operator in this list. Commutative, associative
Union (ArcGIS, QGIS, Manifold, TNTmips; OR in GRASS): The result includes all of the LCGUs, both those where the inputs intersect and where they do not. This is identical to the set theoretic union of the input layers. Commutative, associative
Subtract (TNTmips; Erase in ArcGIS; Difference in QGIS; NOT in GRASS; missing from Manifold): The result includes only the portions of polygons in one layer that do not overlap with the other layer; that is, the LCGUs that have no parent from the other layer. Non-commutative, non-associative
Exclusive or (Symmetrical Difference in ArcGIS, QGIS; Exclusive Union in TNTmips; XOR in GRASS; missing from Manifold): The result includes the portions of polygons in both layers that do not overlap; that is, all LCGUs that have one parent. This could also be achieved by computing the intersection and the union, then subtracting the intersection from the union, or by subtracting each layer from the other, then computing the union of the two subtractions. Commutative, associative
Clip (ArcGIS, QGIS, GRASS, Manifold; Extract Inside in TNTmips): The result includes the portions of polygons of one layer where they intersect the other layer. The outline is the same as the intersection, but the interior only includes the polygons of one layer rather than computing the LCGUs. Non-commutative, non-associative
Cover (Update in ArcGIS and Manifold; Replace in TNTmips; not in QGIS or GRASS): The result includes one layer intact, with the portions of the polygons of the other layer only where the two layers do not intersect. It is called "cover" because the result looks like one layer is covering the other; it is called "update" in ArcGIS because the most common use is when the two layers represent the same theme, but one represents recent changes (e.g., new parcels) that need to replace the older ones in the same location. It can be replicated by subtracting one layer from the other, then computing the union of that result with the original first layer. Non-commutative, non-associative
Divide (Identity in ArcGIS and Manifold; not in QGIS, TNTmips, or GRASS): The result includes all of the LCGUs that cover one of the input layers, excluding those that are only in the other layer. It is called "divide" because it has the appearance of one layer being used to divide the polygons of the other layer. It can be replicated by computing the intersection, then subtracting one layer from the other, then computing the union of these two results. Non-commutative, non-associative
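As a rough illustration of how the operators above appear in practice, the following sketch maps them onto the overlay functions of the GeoPandas library. GeoPandas is an assumed choice here, not one of the packages named in the list, and the toy layers and column names are invented for the example; attribute handling and edge cases differ between packages.

```python
# Hedged example of the common overlay operators, assuming GeoPandas.
import geopandas as gpd
from shapely.geometry import Polygon

zones = gpd.GeoDataFrame(
    {"zone": ["A1"]},
    geometry=[Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])],
)
soils = gpd.GeoDataFrame(
    {"soil": ["clay"]},
    geometry=[Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])],
)

intersect = gpd.overlay(zones, soils, how="intersection")         # AND
union_all = gpd.overlay(zones, soils, how="union")                # OR
erase = gpd.overlay(zones, soils, how="difference")               # Subtract
sym_diff = gpd.overlay(zones, soils, how="symmetric_difference")  # XOR
identity = gpd.overlay(zones, soils, how="identity")              # Divide / Identity
clipped = gpd.clip(zones, soils)                                  # Clip

print(len(intersect), len(union_all), len(erase), len(sym_diff))
```

Each overlay call returns a new layer whose attribute table combines columns from both inputs, in line with the LCGU construction described in the algorithm section.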
Boolean overlay algebra
One of the most common uses of polygon overlay is to perform a suitability analysis, also known as a suitability model or multi-criteria evaluation. The task is to find the region that meets a set of criteria, each of which can be represented by a region. For example, the habitat of a species of wildlife might need to be A) within certain vegetation cover types, B) within a threshold distance of a water source (computed using a buffer), and C) not within a threshold distance of significant roads. Each of the criteria can be considered boolean in the sense of Boolean logic, because for any point in space, each criterion is either present or not present, and the point is either in the final habitat area or it is not (acknowledging that the criteria may be vague, but this requires more complex fuzzy suitability analysis methods). That is, which vegetation polygon the point is in is not important, only whether it is suitable or not suitable. This means that the criteria can be expressed as a Boolean logic expression, in this case, H = A and B and not C.
In a task such as this, the overlay procedure can be simplified because the individual polygons within each layer are not important, and can be dissolved into a single boolean region (consisting of one or more disjoint polygons but no adjacent polygons) representing the region that meets the criterion. With these inputs, each of the operators of Boolean logic corresponds exactly to one of the polygon overlay operators: intersect = AND, union = OR, subtract = AND NOT, exclusive or = XOR. Thus, the above habitat region would be generated by computing the intersection of A and B, and subtracting C from the result.
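As a concrete sketch of the habitat example, assuming each criterion has already been dissolved into a single boolean region, the expression H = A AND B AND NOT C translates directly into two geometry operations. The geometries and the use of Shapely below are illustrative assumptions, not part of the original example.

```python
# Boolean overlay algebra: habitat = A AND B AND NOT C, on dissolved regions.
from shapely.geometry import Polygon
from shapely.ops import unary_union

vegetation = unary_union([Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])])  # A: suitable cover types
near_water = unary_union([Polygon([(3, 3), (12, 3), (12, 12), (3, 12)])])  # B: water buffer result
near_roads = unary_union([Polygon([(8, 0), (9, 0), (9, 20), (8, 20)])])    # C: road buffer result

# intersect corresponds to AND, subtract corresponds to AND NOT
habitat = vegetation.intersection(near_water).difference(near_roads)
print(habitat.area)
```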
Thus, this particular use of polygon overlay can be treated as an algebra that is homomorphic to Boolean logic. This enables the use of GIS to solve many spatial tasks that can be reduced to simple logic.
Lines and points
Vector overlay is most commonly performed using two polygon layers as input and creating a third polygon layer. However, it is possible to perform the same algorithm (parts of it at least) on points and lines. The following operations are typically supported in GIS software:
Intersect: The output will be of the same dimension as the lower of the inputs: Points * {Points, Lines, Polygons} = Points, Lines * {Lines, Polygons} = Lines. This is often used as a form of spatial join, as it merges the attribute tables of the two layers analogous to a table join. An example of this would be allocating students to school districts. Because it is rare for a point to exactly fall on a line or another point, the fuzzy tolerance is often used here. QGIS has separate operations for computing a line intersection as lines (to find coincident lines) and as points.
Subtract: The output will be of the same dimension as the primary input, with the subtraction layer being of the same or higher dimension: Points - {Points, Lines, Polygons} = Points, Lines - {Lines, Polygons} = Lines
Clip: While the primary input can be points or lines, the clipping layer is usually required to be polygons, producing the same geometry as the primary input, but only including those features (or parts of lines) that are within the clipping polygons. This operation might also be considered a form of spatial query, as it retains the features of one layer based on its topological relationship to another.
Union: Normally, both input layers are expected to be of the same dimensionality, producing an output layer including both sets of features. ArcGIS and GRASS do not allow this option with points or lines.
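The point case can be sketched in the same assumed GeoPandas setting as above; the student and district names are invented for illustration, and the spatial-join keyword differs between library versions.

```python
# Point-in-polygon overlay as a spatial join and as a clip, assuming GeoPandas.
import geopandas as gpd
from shapely.geometry import Point, Polygon

students = gpd.GeoDataFrame(
    {"name": ["s1", "s2"]},
    geometry=[Point(1, 1), Point(7, 7)],
)
districts = gpd.GeoDataFrame(
    {"district": ["north"]},
    geometry=[Polygon([(0, 0), (5, 0), (5, 5), (0, 5)])],
)

# Intersect for points: attach district attributes to each contained point.
# (Older GeoPandas versions spell this keyword `op` rather than `predicate`.)
assigned = gpd.sjoin(students, districts, predicate="within")

# Clip: keep only the points that fall inside the clipping polygons.
inside = gpd.clip(students, districts)

print(assigned[["name", "district"]])
print(len(inside))
```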
Implementations
Vector Overlay is included in some form in virtually every GIS software package that supports vector analysis, although the interface and underlying algorithms vary significantly.
Esri GIS software has included polygon overlay since the first release of ARC/INFO in 1982. Each generation of Esri software (ARC/INFO, ArcGIS, ArcGIS Pro) has included a set of separate tools for each of the overlay operators (Intersect, Union, Clip, etc.). The current implementation in ArcGIS Pro recently added an alternative set of "Pairwise Overlay" tools (as of v2.7) that uses parallel processing to more efficiently process very large datasets.
GRASS GIS (open source), although it was originally raster-based, has included overlay as part of its vector system since GRASS 3.0 (1988). Most of the polygon overlay operators are collected into a single v.overlay command, with v.clip as a separate command.
QGIS (open source) originally incorporated GRASS as its analytical engine, but has gradually developed its own processing framework, including vector overlay.
Manifold System implements overlay in its transformation system.
The Turf Javascript API includes the most common overlay methods, although these operate on individual input polygon objects, not on entire layers.
TNTmips includes several tools for overlay among its vector analysis process.
References
External links
The Overlay toolset documentation in Esri ArcGIS
v.overlay command documentation in GRASS GIS
Vector Overlay documentation in QGIS
Topology Overlays documentation in Manifold
GIS software
Geographic information systems |
346547 | https://en.wikipedia.org/wiki/Reusability | Reusability | In computer science and software engineering, reusability is the use of existing assets in some form within the software product development process; these assets are products and by-products of the software development life cycle and include code, software components, test suites, designs and documentation. The opposite concept of reusability is leverage, which modifies existing assets as needed to meet specific system requirements. Because reuse avoids such modification of the shared assets, it is generally preferred over leverage.
Subroutines or functions are the simplest form of reuse. A chunk of code is regularly organized using modules or namespaces into layers. Proponents claim that objects and software components offer a more advanced form of reusability, although it has been difficult to measure objectively and to define levels or scores of reusability.
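As a minimal sketch of reuse at this simplest level, the helper below lives in one module and is imported by any number of clients instead of being copied into each of them; the file and function names are invented for the example.

```python
# text_utils.py -- a small reusable module shared across programs.
def slugify(title: str) -> str:
    """Turn a title into a URL-friendly slug."""
    return "-".join(title.lower().split())

# Any client module would reuse it with:  from text_utils import slugify
print(slugify("Reusability in Practice"))  # -> "reusability-in-practice"
```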
The ability to reuse relies in an essential way on the ability to build larger things from smaller parts, and being able to identify commonalities among those parts. Reusability is often a required characteristic of platform software. Reusability brings several aspects to software development that do not need to be considered when reusability is not required.
Reusability implies some explicit management of build, packaging, distribution, installation, configuration, deployment, maintenance and upgrade issues. If these issues are not considered, software may appear to be reusable from a design point of view, but will not be reused in practice.
Software reusability more specifically refers to design features of a software element (or collection of software elements) that enhance its suitability for reuse.
Many reuse design principles were developed at the WISR workshops.
Candidate design features for software reuse include:
Adaptable
Brief: small size
Consistency
Correctness
Extensibility
Fast
Flexible
Generic
Localization of volatile (changeable) design assumptions (David Parnas)
Modularity
Orthogonality
Parameterization
Simple: low complexity
Stability under changing requirements
Consensus has not yet been reached on the relative importance of the entries in this list, nor on the issues which make each one important for a particular class of applications.
See also
Code reuse
References
Source code
Software quality |
40874 | https://en.wikipedia.org/wiki/Circuit%20switching | Circuit switching | Circuit switching is a method of implementing a telecommunications network in which two network nodes establish a dedicated communications channel (circuit) through the network before the nodes may communicate. The circuit guarantees the full bandwidth of the channel and remains connected for the duration of the communication session. The circuit functions as if the nodes were physically connected as with an electrical circuit. Circuit switching originated in analog telephone networks where the network created a dedicated circuit between two telephones for the duration of a telephone call. It contrasts with message switching and packet switching used in modern digital networks in which the trunklines between switching centers carry data between many different nodes in the form of data packets without dedicated circuits.
Description
The defining example of a circuit-switched network is the early analog telephone network. When a call is made from one telephone to another, switches within the telephone exchanges create a continuous wire circuit between the two telephones, for as long as the call lasts.
In circuit switching, the bit delay is constant during a connection (as opposed to packet switching, where packet queues may cause varying and potentially indefinitely long packet transfer delays). No circuit can be degraded by competing users because it is protected from use by other callers until the circuit is released and a new connection is set up. Even if no actual communication is taking place, the channel remains reserved and protected from competing users.
While circuit switching is commonly used for connecting voice circuits, the concept of a dedicated path persisting between two communicating parties or nodes can be extended to signal content other than voice. The advantage of using circuit switching is that it provides for continuous transfer without the overhead associated with packets, making maximal use of available bandwidth for that communication. One disadvantage is that it can be relatively inefficient because unused capacity guaranteed to a connection cannot be used by other connections on the same network. In addition, calls cannot be established or will be dropped if the circuit is broken.
The call
For call setup and control (and other administrative purposes), it is possible to use a separate dedicated signalling channel from the end node to the network. ISDN is one such service that uses a separate signalling channel while plain old telephone service (POTS) does not.
The method of establishing the connection and monitoring its progress and termination through the network may also utilize a separate control channel as in the case of links between telephone exchanges which use CCS7 packet-switched signalling protocol to communicate the call setup and control information and use TDM to transport the actual circuit data.
Early telephone exchanges were a suitable example of circuit switching. The subscriber would ask the operator to connect to another subscriber, whether on the same exchange or via an inter-exchange link and another operator. The result was a physical electrical connection between the two subscribers' telephones for the duration of the call. The copper wire used for the connection could not be used to carry other calls at the same time, even if the subscribers were in fact not talking and the line was silent.
Alternatives
In circuit switching, a route and its associated bandwidth is reserved from source to destination, making circuit switching relatively inefficient since capacity is reserved whether or not the connection is in continuous use. Circuit switching contrasts with message switching and packet switching. Both of these methods can make better use of available network bandwidth between multiple communication sessions under typical conditions in data communication networks.
Message switching routes messages in their entirety, one hop at a time, that is, store and forward of the entire message. Packet switching divides the data to be transmitted into packets transmitted through the network independently. Instead of being dedicated to one communication session at a time, network links are shared by packets from multiple competing communication sessions, resulting in the loss of the quality of service guarantees that are provided by circuit switching.
Packet switching can be based on connection-oriented communication or connection-less communication. That is, based on virtual circuits or datagrams.
Virtual circuits use packet switching technology that emulates circuit switching, in the sense that the connection is established before any packets are transferred, and packets are delivered in order.
Connection-less packet switching divides the data to be transmitted into packets, called datagrams, transmitted through the network independently. Each datagram is labeled with its destination and a sequence number for ordering related packets, precluding the need for a dedicated path to help the packet find its way to its destination. Each datagram is dispatched independently and each may be routed via a different path. At the destination, the original message is reordered based on the packet number to reproduce the original message. As a result, datagram packet switching networks do not require a circuit to be established and allow many pairs of nodes to communicate concurrently over the same channel.
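The following toy sketch, which is not a real network protocol, illustrates the datagram behaviour just described: fragments carry a destination and sequence number, may arrive in any order, and are reordered by the receiver.

```python
# Toy illustration of datagram fragmentation and reassembly by sequence number.
from dataclasses import dataclass
import random

@dataclass
class Datagram:
    destination: str
    seq: int
    payload: str

message = "CIRCUITS VS PACKETS"
chunks = (message[j:j + 4] for j in range(0, len(message), 4))
datagrams = [Datagram("host-b", i, chunk) for i, chunk in enumerate(chunks)]

random.shuffle(datagrams)  # independent routing means arbitrary arrival order

received = sorted(datagrams, key=lambda d: d.seq)
reassembled = "".join(d.payload for d in received)
assert reassembled == message
print(reassembled)
```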
Multiplexing multiple telecommunications connections over the same physical conductor has been possible for a long time, but each channel on the multiplexed link was either dedicated to one call at a time, or it was idle between calls.
Examples of circuit-switched networks
Public switched telephone network (PSTN)
B channel of ISDN
Circuit Switched Data (CSD) and High-Speed Circuit-Switched Data (HSCSD) service in cellular systems such as GSM
Datakit
X.21 (Used in the German DATEX-L and Scandinavian DATEX circuit switched data network)
Optical mesh network
See also
Clos network
Switching circuit theory
Time-driven switching
References
External links
Netheads vs Bellheads by Steve Steinberg
University of Virginia
RFC 3439 Some Internet Architectural Guidelines and Philosophy
Teletraffic
Network architecture
Physical layer protocols |
23997010 | https://en.wikipedia.org/wiki/Malcolm%20Smith%20%28American%20football%29 | Malcolm Smith (American football) | Malcolm Xavier Smith (born July 5, 1989) is an American football linebacker for the Cleveland Browns of the National Football League (NFL). He played college football at USC. He was drafted by the Seattle Seahawks in the seventh round of the 2011 NFL Draft. Smith was named the Most Valuable Player of Super Bowl XLVIII after the Seahawks defeated the Denver Broncos.
High school career
Smith attended William Howard Taft High School, where he was a letterman in football and track. In football, he was named to the Student Sports Sophomore All-American and Cal-Hi Sports All-State first team as a 2004 sophomore when he had 800-plus yards of total offense and 8 touchdowns, plus 2 interceptions, as Taft won the L.A. City title. As a junior in 2005, he made Cal-Hi Sports All-State Underclass first team, All-L.A. City first team and Los Angeles Daily News All-Area first team while making 41 tackles, 2 sacks and 1 fumble recovery, plus running for 639 yards on 73 carries (8.8 avg.) with 10 touchdowns and catching 27 passes for 411 yards (15.2 avg.) with 7 scores as Taft was the L.A. City runner-up. In his final year in 2006, he had 31 tackles, 10 sacks, and four fumble recoveries at linebacker and ran for 919 yards on 118 carries with 15 touchdowns as a running back, despite missing the first half of the season with a leg injury.
Also a standout track athlete, Smith competed as a sprinter for the Taft High track & field team. He qualified for the Los Angeles City Section T&F Championships in the 100m and 200m dashes. He recorded a personal-best time of 10.8 seconds in the 100-meter dash as a junior, and got a PR of 22.39 seconds in the 200-meter dash as a senior. He also ran the 40-yard dash in 4.45 seconds.
Smith received scholarship offers for football from Notre Dame, California, Arizona, Michigan, and Penn State.
College career
Smith enrolled at the University of Southern California (USC) in order to play college football for the USC Trojans football team. As a true freshman in 2007, Smith played in all 13 games as a backup linebacker and special teams player. He finished the year with six tackles and a forced fumble.
As a sophomore in 2008 he again spent time as a backup and on special teams. He finished the season with 18 tackles in 13 games. In the 2009 game against cross-town rival UCLA Bruins, Smith led the Trojan defense for a 28–7 win. Smith returned the first UCLA interception 62 yards for a touchdown in the first quarter. For his play, Smith was awarded the "Legend Nike Game Ball" for the National Defensive Player of the Week.
Professional career
Smith entered the 2011 NFL Draft, but did not receive an invitation to perform at the NFL Scouting Combine in Indianapolis, Indiana. On March 30, 2011, Smith attended USC's pro day and performed all of the combine and positional drills for team representatives and scouts. He also attended private meetings with the Seattle Seahawks and Chicago Bears. Smith was projected to be a sixth or seventh round pick by NFL draft experts and scouts. At the conclusion of the pre-draft process, Smith was ranked as the 29th best outside linebacker prospect in the draft by DraftScout.com and was ranked the 37th best outside linebacker by Scouts Inc.
Seattle Seahawks
The Seattle Seahawks selected Smith in the seventh round (242nd overall) of the 2011 NFL Draft. Smith was the 30th linebacker drafted in 2011 and reunited with Seattle Seahawks' head coach Pete Carroll. Carroll was Smith's head coach at USC from 2007–2009.
2011
On July 28, 2011, the Seattle Seahawks signed Smith to a four-year, $2.08 million contract that includes a signing bonus of $45,900.
Throughout training camp, Smith competed to be a backup outside linebacker against K. J. Wright, Matt McCoy, and David Vobora. Head coach Pete Carroll named Smith the backup weakside linebacker to start the regular season, behind Leroy Hill.
He made his professional regular season debut in the Seattle Seahawks' season-opener at the San Francisco 49ers and made two combined tackles in their 33–17 loss. Smith made his first career regular season tackle on Ted Ginn Jr. and stopped him for a ten-yard loss during a punt return in the second quarter. On November 13, 2011, Smith collected a season-high four solo tackles, forced a fumble, and made his first career sack during a 22–17 victory against the Baltimore Ravens. Smith forced a fumble by David Reed that was recovered by teammate Atari Bigby and led to a last-second field goal before the end of the second quarter. He also sacked Ravens' quarterback Joe Flacco for an eight-yard loss in the fourth quarter. Smith was inactive for the last two games of the regular season (Weeks 16–17). He finished his rookie season in 2011 with 16 combined tackles (ten solo), a forced fumble, and a sack while appearing in 12 games with zero starts. Smith received an overall grade of 52.1 from Pro Football Focus for his rookie season.
2012
During training camp, Smith competed for a roster spot as a backup outside linebacker against Jameson Konz, Allen Bradford, Heath Farwell, Mike Morgan, and Kyle Knox. Defensive coordinator Gus Bradley retained Smith as a backup outside linebacker, behind Leroy Hill and K. J. Wright, to begin the regular season. On December 2, 2012, Smith made his first career start and replaced Leroy Hill who was inactive due to an ankle injury. He finished the Seahawks' 23–17 win at the Chicago Bears with two solo tackles. In Week 16, he collected a season-high five combined tackles during a 42–13 victory against the San Francisco 49ers. Smith completed the 2012 season with 22 combined tackles (12 solo) and two pass deflections in 16 games and three starts. Smith was limited to 166 defensive snaps (16%), but played 258 snaps on special teams (60%). Smith earned an overall grade of 77.0 from Pro Football Focus in 2012.
The Seattle Seahawks finished second in the NFC West with an 11–5 record and earned a wildcard berth. On January 6, 2013, Smith appeared in his first career playoff game and recorded three solo tackles during the Seahawks' 24–14 win at the Washington Redskins in the NFC Wildcard Game. The following week, he made three solo tackles as the Seahawks lost the NFC Divisional Round 30–28 at the Atlanta Falcons.
2013
Smith entered training camp slated as the starting weakside linebacker after the departure of Leroy Hill. Smith received competition from Korey Toomer, Bruce Irvin, Mike Morgan, and Allen Bradford. Head coach Pete Carroll officially named Smith and K. J. Wright the starting outside linebackers to start the season, along with middle linebacker Bobby Wagner.
He started in the Seattle Seahawks' season-opener at the Carolina Panthers and made two solo tackles in their 12–7 victory. Smith was sidelined for the Seahawks' Week 3 victory against the Jacksonville Jaguars due to a hamstring injury. On October 17, 2013, Smith collected a season-high nine combined tackles, a pass deflection, and a sack during a 34–22 win at the Arizona Cardinals in Week 7. The following week, Smith was surpassed on the depth chart by Bruce Irvin, who was named the starting strongside linebacker in his place. Smith became the starting weakside linebacker in Week 15 after K. J. Wright sustained a fractured foot and was sidelined for the last three games of the regular season. On December 22, 2013, Smith recorded eight combined tackles, a pass deflection, and made his first career interception during a 17–10 loss to the Arizona Cardinals in Week 16. Smith intercepted a pass attempt by Cardinals' quarterback Carson Palmer that was originally intended for running back Andre Ellington, and returned it for a 32-yard gain in the second quarter. In Week 17, Smith made five combined tackles, a pass deflection, and returned an interception for his first career touchdown in a 27–9 win against the St. Louis Rams. He intercepted a pass by Kellen Clemens that was originally thrown to tight end Lance Kendricks, and returned it for a 37-yard touchdown in the first quarter. Smith finished the 2013 season with 54 combined tackles (34 solo), four passes defensed, a forced fumble, one sack, an interception, and a touchdown in 15 games and eight starts. Smith completed the season with 480 defensive snaps (46%) and 228 snaps on special teams (51%). Pro Football Focus gave Smith an overall grade of 83.0 in 2013. His grade ranked 16th among the 56 qualifying linebackers in the league.
The Seattle Seahawks finished first in the NFC West with a 13–3 record and earned a first round bye and home-field advantage throughout the playoffs. On January 11, 2014, Smith started in his first career playoff game and made nine combined tackles during a 23–15 victory against the New Orleans Saints in the NFC Divisional Round. On January 19, 2014, Smith made four combined tackles, a pass deflection, and made an interception to seal the Seahawks' 23–17 victory against the San Francisco 49ers in the NFC Championship Game. He intercepted a pass by quarterback Colin Kaepernick, that was intended for wide receiver Michael Crabtree. The pass was deflected by Richard Sherman and caught in the endzone for a touchback by Smith with the Seahawks up by six points with 30 seconds remaining.
On February 2, 2014, Smith recorded ten combined tackles (six solo), deflected a pass, recovered a fumble, and returned an interception for a touchdown in the Seahawks' 43–8 victory against the Denver Broncos in Super Bowl XLVIII. Smith intercepted a pass by quarterback Peyton Manning, that was intended for running back Knowshon Moreno, and returned it for a 69-yard touchdown in the second quarter. He also recovered a fumble by wide receiver Demaryius Thomas in the third quarter after it was forced by teammate Byron Maxwell. His performance earned him the Super Bowl MVP award, making him the first defensive player to win the award since Dexter Jackson in Super Bowl XXXVII. Smith is one of seven defensive players to win Super Bowl MVP honors.
2014
Smith returned to a reserve role in 2014 after the return of K. J. Wright from injury. Defensive coordinator Dan Quinn opted to retain Bruce Irvin, K. J. Wright and Bobby Wagner as the starting linebacker trio to start the regular season. In Week 7, Smith earned his first start of the season after Bobby Wagner was inactive for five games (Weeks 7–11) due to a turf toe injury. He finished the Seahawks' 28–26 loss at the St. Louis Rams with a season-high ten solo tackles. Smith was inactive for two games (Weeks 9–10) due to a groin injury. Smith finished the 2014 season with 38 combined tackles (28 solo), two forced fumbles, and a pass deflection in 14 games and five starts. Smith played predominantly on special teams and finished the season with 255 snaps (59%) on special teams and 273 snaps (27%) on defense. Pro Football Focus gave Smith an overall grade of 37.3, which marked the lowest grade of his career.
The Seattle Seahawks finished first in the NFC West with a 12–4 record and secured a playoff berth. On January 18, 2015, the Seattle Seahawks played the Green Bay Packers in the NFC Championship Game after defeating the Carolina Panthers 31–17 in the NFC Divisional Round. Smith made two combined tackles as Seattle defeated the Packers 28–22. On February 1, 2015, Smith appeared in Super Bowl XLIX, but was held without a stat as the New England Patriots defeated the Seattle Seahawks 28–24.
Oakland Raiders
2015
On March 10, 2015, the Oakland Raiders signed Smith to a two-year, $7 million contract with $3.75 million guaranteed and a signing bonus of $2 million. Smith reunited with defensive coordinator Ken Norton Jr., who was his linebackers coach from 2007–2009 at USC and from 2010–2014 with the Seattle Seahawks.
Head coach and fellow USC alumnus Jack Del Rio named Smith the starting weakside linebacker to start the regular season, along with Ray-Ray Armstrong and starting middle linebacker Curtis Lofton. On October 25, 2015, Smith had a season-high 11 solo tackles, two pass deflections, a sack, and an interception in the Raiders' 37–29 victory at the San Diego Chargers. Smith intercepted a pass by quarterback Philip Rivers, that was originally intended for wide receiver Stevie Johnson, and returned it for a 27-yard gain during the Chargers' opening drive. In Week 16, Smith made a career-high 14 combined tackles (11 solo) during a 23–20 win against the San Diego Chargers. Smith started all 16 games in 2015 and made a career-high 122 combined tackles (99 solo), six passes defensed, and four sacks. He also had two forced fumbles and an interception. Smith earned an overall grade of 44.3 from Pro Football Focus, which ranked 60th among all qualifying linebackers in 2015.
2016
Head coach Jack Del Rio retained Smith as the starting weakside linebacker to start 2016, along with Bruce Irvin and middle linebacker Ben Heeney. Smith was inactive for the Raiders' Week 5 win against the San Diego Chargers due to a quadriceps injury. In Week 11, Smith collected ten combined tackles (nine solo), made a pass deflection, and intercepted a pass by Brock Osweiler during a 27–20 victory against the Houston Texans in Mexico City. On January 1, 2017, Smith recorded a season-high 12 combined tackles (nine solo) as the Raiders lost 24–6 at the Denver Broncos in Week 17. He finished the 2016 season with 103 combined tackles (86 solo), three pass deflections, two forced fumbles, and an interception in 15 games and 15 starts. Smith's performance suffered in 2016 and he earned an overall grade of 46.3 from Pro Football Focus. His grade ranked 68th among 88 qualifying linebackers during the season.
San Francisco 49ers
On March 9, 2017, the San Francisco 49ers signed Smith to a five-year, $26.50 million contract with $11.50 million guaranteed and a signing bonus of $7 million.
Smith entered training camp slated as the starting right outside linebacker, but saw competition from rookie first round pick Reuben Foster. On August 5, 2017, Smith injured his pectoral during a training camp practice held in Levi's Stadium. On August 7, 2017, the San Francisco 49ers officially placed Smith on injured reserve after an MRI determined he had suffered a torn pectoral muscle. He remained on injured reserve throughout the entire 2017 season.
In 2018, he appeared in 12 games, tallying 35 tackles (22 solo) and one pass defensed. On August 27, 2019, Smith was released by the 49ers.
Jacksonville Jaguars
On October 22, 2019, Smith was signed by the Jacksonville Jaguars. He appeared in 2 games as a backup. He was released on November 5.
Dallas Cowboys
On December 17, 2019, Smith was signed as a free agent by the Dallas Cowboys to provide depth because of injuries for the last 2 games, reuniting with his former defensive coordinator Kris Richard. He appeared in 2 games with one start, registering 5 tackles and one forced fumble.
Cleveland Browns
Smith signed with the Cleveland Browns on August 23, 2020. In Week 3 against the Washington Football Team, Smith recorded his first interception as a Brown during the 34–20 win. This was Smith's first interception since 2016. He was placed on the reserve/COVID-19 list by the Browns on December 31, 2020, and activated on January 9, 2021.
Smith re-signed with the Browns on March 18, 2021.
NFL career statistics
Regular season
Personal life
Smith's brother Steve Smith also attended USC from 2003 to 2006 and played wide receiver for the New York Giants, Philadelphia Eagles, and the St. Louis Rams.
Smith has achalasia, a rare disorder of the esophagus which affects its ability to move food toward the stomach. It started to affect him around the time of the 2009 Rose Bowl, when he began losing a few pounds of body weight each week because food would get stuck in his esophagus and he would have to throw it up. The weight loss was a problem as Smith tried to keep his weight up to . Originally diagnosed as acid reflux, the condition was revealed by further tests to be achalasia. Smith underwent a surgical procedure called a Heller myotomy which helped somewhat, but he still has dietary restrictions that force him to eat very slowly.
References
External links
Cleveland Browns bio
USC Trojans bio
1989 births
Living people
African-American players of American football
American football linebackers
Cleveland Browns players
Dallas Cowboys players
Jacksonville Jaguars players
Oakland Raiders players
People from Woodland Hills, Los Angeles
Players of American football from Los Angeles
San Francisco 49ers players
Seattle Seahawks players
Super Bowl MVPs
USC Trojans football players
William Howard Taft Charter High School alumni |
36963994 | https://en.wikipedia.org/wiki/Blast2GO | Blast2GO | Blast2GO, first published in 2005, is a bioinformatics software tool for the automatic, high-throughput functional annotation of novel sequence data (genes or proteins). It makes use of the BLAST algorithm to identify similar sequences and then transfers existing functional annotation from already characterised sequences to the novel ones. The functional information is represented via the Gene Ontology (GO), a controlled vocabulary of functional attributes. The Gene Ontology is a major bioinformatics initiative to unify the representation of gene and gene product attributes across all species.
See also
Protein function prediction
Functional genomics
Bioinformatics
References
External links
Blast2GO - Tool for functional annotation of (novel) sequences and the analysis of annotation data.
Company developing Blast2GO — BioBam Bioinformatics S.L., a bioinformatics company dedicated to creating user-friendly software for the scientific community is developing, maintaining and distributing Blast2GO.
Gene Ontology Tools — Provides access to the ontologies, software tools, annotated gene product lists, and reference documents describing the GO and its uses.
PlantRegMap—Plant GO annotation for 165 species and GO enrichment analysis
Bioinformatics algorithms
Bioinformatics software
Laboratory software
Public-domain software
Genomics
Omics |
9729203 | https://en.wikipedia.org/wiki/IT8 | IT8 | IT8 is a set of American National Standards Institute (ANSI) standards for color communications and control specifications. Formerly governed by the IT8 Committee, IT8 activities were merged with those of the Committee for Graphics Arts Technologies Standards (CGATS) in 1994.
Standards list
The following is a list of the IT8 standards, according to the NPES Standards Blue Book:
IT8.6 - 2002 - Graphic technology - Prepress digital data exchange - Diecutting data (DDES3)
This standard establishes a data exchange format to enable transfer of numerical control information between diecutting systems and electronic prepress systems. The information will typically consist of numerical control information used in the manufacture of dies. 37 pp.
IT8.7/1 - 1993 (R2003) - Graphic technology - Color transmission target for input scanner calibration
This standard defines an input test target that will allow any color input scanner to be calibrated with any film dye set used to create the target. It is intended to address the color transparency products that are generally used for input to the preparatory process for printing and publishing. This standard defines the layout and colorimetric values of a target that can be manufactured on any positive color transparency film and that is intended for use in the calibration of a photographic film/scanner combination. 32 pp.
IT8.7/2 - 1993 (R2003) Graphic technology - Color reflection target for input scanner calibration
This standard defines an input test target that will allow any color input scanner to be calibrated with any film dye set used to create the target. It is intended to address the color photographic paper products that are generally used for input to the preparatory process for printing and publishing. It defines the layout and colorimetric values of the target that can be manufactured on any color photographic paper and is intended for use in the calibration of a photographic paper/scanner combination. 29 pp.
IT8.7/3 - 1993 (R2003) Graphic technology - Input data for characterization of 4-color process printing
The purpose of this standard is to specify an input data file, a measurement procedure and an output data format to characterize any four-color printing process. The output data (characterization) file should be transferred with any four-color (cyan, magenta, yellow and black) halftone image files to enable a color transformation to be undertaken when required. 29 pp.
Targets
Calibrating all devices involved in the reproduction chain (original, scanner or digital camera, monitor, printer) is required for faithful color reproduction, because each device's actual color space deviates from the reference color space in a device-specific way.
An IT8 calibration is done with what are called IT8 targets, which are defined by the IT8 standards.
Example
Special targets, implementing the IT8.7/1 (transparent target) or IT8.7/2 (reflective target) standards, are needed for calibrating scanners. These targets consist of 24 grey fields and 264 color fields arranged in 22 columns:
Columns 01 to 12: colors defined in the HCL color model, differing in hue, chroma, and lightness
Columns 13 to 16: the CMYK colors cyan, magenta, yellow, and key (black) at different brightness levels
Columns 17 to 19: the RGB colors red, green, and blue at different brightness levels
Columns 20 to 22: undefined, left to the producer's choice
After scanning such a target, an ICC profile is calculated from the known reference values of the patches. This profile is then applied to all subsequent scans and ensures color fidelity.
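The principle of that calculation can be illustrated with a simple least-squares fit that maps measured scanner values toward the reference values supplied with the target batch. Real ICC profile generation uses far richer models (tone curves, multidimensional look-up tables), so the Python sketch below, which uses made-up patch values, only shows the basic idea.

import numpy as np

# scanned: RGB values measured from patches of an IT8 target scan (hypothetical values)
# reference: colorimetric reference values supplied for the same patches (hypothetical values)
scanned = np.array([[0.90, 0.10, 0.12], [0.15, 0.80, 0.20], [0.10, 0.12, 0.85], [0.50, 0.50, 0.50]])
reference = np.array([[0.41, 0.21, 0.02], [0.36, 0.72, 0.12], [0.18, 0.07, 0.95], [0.48, 0.50, 0.54]])

# Augment with a constant column so the fit includes an offset term, then solve by least squares.
A = np.hstack([scanned, np.ones((scanned.shape[0], 1))])
M, *_ = np.linalg.lstsq(A, reference, rcond=None)   # 4x3 correction matrix

def correct(rgb):
    """Map a scanner RGB triple toward the reference colour space."""
    return np.append(rgb, 1.0) @ M

print(correct(np.array([0.90, 0.10, 0.12])))   # approximately the first reference patch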
See also
List of colors
Color chart
Color calibration
Color management
Color mapping
ICC profile
References
Romano, Richard and Frank. (1998). The GATF encyclopedia of graphic communications. GATF Press.
Digital Color Imaging Handbook; Gaurav Sharma; ; 2002.
External links
IT8 target sources
EGM Laboratoris Color
Wolf Faust
SilverFast
IT8-enabled software
ExactScan Pro, Windows, Mac, Linux
SilverFast, Windows, Mac
VueScan, Windows, Mac, Linux
CoCa, Color camera calibrator, Windows, Linux
Rough Profiler, Mac
IT8 articles
Importance of Calibration & Characterization
Does Cost Really Make A Difference
Profiling a Camera with an IT8 Target
What is an IT8 color card? (In Spanish)
Computer graphics
American National Standards Institute standards
Printing terminology
Print production |
65118 | https://en.wikipedia.org/wiki/Poplog | Poplog | Poplog is an open source, reflective, incrementally compiled software development environment for the programming languages POP-11, Common Lisp, Prolog, and Standard ML, originally created in the UK for teaching and research in Artificial Intelligence at the University of Sussex, and later marketed as a commercial package for software development as well as for teaching and research. It was one of the initiatives supported for a while by the UK government-funded Alvey Programme.
History
After an incremental compiler for Prolog had been added to an implementation of POP-11, the name POPLOG was adopted to reflect the fact that the expanded system supported programming in both languages. The name was retained, as a trade mark of the University of Sussex, when the system was later (mid 1980s) extended with incremental compilers for Common Lisp and Standard ML, based on a set of tools for implementing new languages in the Poplog Virtual Machine. The user-accessible incremental-compiler tools that allow compilers for all these languages to be added also allow extensions to be made within a language, providing new capabilities that cannot be obtained with standard macros, which merely make new text equivalent to a longer portion of old text.
For some time after 1983, Poplog was sold and supported internationally as a commercial product, on behalf of the University of Sussex by Systems Designers Ltd (SDL), whose name changed as ownership changed. The main development work continued to be done by a small team at Sussex University until 1998, while marketing, sales, and support (except for UK academic users, who dealt directly with the Sussex team) was done by SDL and its successors (SD, then SD-Scicon then EDS) until 1991. At that time a management buy-out produced a spin-off company Integral Solutions Ltd (ISL), to sell and support Poplog in collaboration with Sussex University, who retained the rights to the name 'Poplog' and were responsible for the core software development while it was a commercial product. In 1992 ISL and Sussex University won a "Smart Award" in recognition of Poplog sales worth $5M.
ISL and its clients used Poplog for a number of development projects, especially ISL's data-mining system Clementine, mostly implemented in POP-11, using powerful graphical tools also implemented in POP-11 running on the X Window System. Clementine was so successful that in 1998 ISL was bought by SPSS Inc., which had been selling the statistics and data-mining package SPSS and needed a better graphical interface suited to both expert and non-expert users. SPSS did not wish to sell and support Poplog as such, so Poplog then became available as a free open source software package, hosted at the University of Birmingham, which had also been involved in its development after 1991. IBM later bought SPSS, and Clementine is now marketed and supported as SPSS Modeler.
Supported languages
Poplog's core language is POP-11. It is used to implement the other languages, all of them incrementally compiled, with an integrated common editor. In the Linux/Unix versions, POP-11 provides support for 2-D graphics via X.
Poplog supports incrementally compiled versions of Common Lisp, POP-11, Prolog, and Standard ML. A separate package implemented by Robin Popplestone supports a version of Scheme.
Poplog has been used both for academic research and teaching in artificial intelligence and also to develop several commercial products, apart from Clementine. In 1992, ISL and Sussex University won an ICP Million Dollar award in recognition of Poplog exceeding sales of US$5 million.
Platforms
POP-11 was at first implemented on a DEC PDP-11 computer in 1976, and was ported to VAX/VMS in 1980. It became Poplog around 1982. Although the first commercial sales were for VAX/VMS, from the mid-1980s, the main Poplog development work was done on Sun SPARC computers running Solaris, although several different versions were sold, including versions for HP-UX and a 64-bit version of Poplog for DEC Alpha running Digital UNIX. After about 1999, when Poplog became available as free, open source, most development work was done on the Linux version, including porting to 64-bit Linux. A partial port to Mac OS X on PowerPC was done in 2005.
There is a version for Windows, originally developed to support Clementine, but the Unix/Linux graphical subsystem does not work on Windows Poplog. The Windows version of Clementine depended on a commercial package that supported X functionality on Windows.
There is also an open source project which aimed to produce a more platform-neutral version of Poplog, including Windows. The most recent development by this project includes a web server component for integration into Poplog applications, and the OpenPoplog Widget Collection for supporting client user interfaces running in a web browser. A more narrowly focused open source Poplog project, restricted to the 64-bit AMD64/x86-64 architecture, was set up on GitHub by Waldek Hebisch. This is now the basis of Poplog Version 16, hosted at the University of Birmingham.
Additional information about the history and features of Poplog can be found in the entries for POP-2 and POP-11. The chief architect of Poplog, responsible for many innovations related to making an incrementally compiled system portable, and providing support for a collection of languages was John Gibson, at Sussex University, though the earliest work was done by Steve Hardy. Chris Mellish helped with the initial Prolog implementation in POP-11. John Williams, working under supervision of Jonathan Cunningham implemented the Common Lisp subsystem. Robert Duncan and Simon Nichols added Standard ML. Between about 1980 and 1991, the project was managed by Aaron Sloman, until he went to the University of Birmingham, though he continued to collaborate with Sussex and ISL on Poplog development after that. Since 1999, he has been responsible for the main Poplog web site, as well as some of the extensions to be found there, listed under POP-11.
Implementation
The Prolog subset of Poplog is implemented using the extendable incremental compiler of POP-11, the core language of Poplog, which is a general purpose Lisp-like language with a more conventional syntax. The implementation required the Poplog Virtual Machine to be extended to provide support for Prolog continuations, Prolog variables, the Prolog trail (recording undoable variable bindings), and Prolog terms. The implementation was constrained by the need to allow data-structures to be shared with the other Poplog languages, especially POP-11 and Common Lisp, thereby providing support for a mixture of programming styles.
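One of those mechanisms, the trail of undoable variable bindings, can be illustrated with a short sketch. The following is a conceptual illustration written in Python, not Poplog or POP-11 code, and it is not how the Poplog Virtual Machine actually implements the mechanism.

# Conceptual sketch of a Prolog-style "trail": bindings made during unification are
# recorded so that they can be undone when the system backtracks to a choice point.
class Var:
    def __init__(self, name):
        self.name, self.value = name, None    # None means "unbound"

trail = []

def bind(var, value):
    trail.append(var)          # remember which variable was bound
    var.value = value

def undo_to(mark):
    while len(trail) > mark:   # unbind everything recorded since the choice point
        trail.pop().value = None

X, Y = Var("X"), Var("Y")
mark = len(trail)              # choice point
bind(X, 1); bind(Y, 2)
undo_to(mark)                  # backtrack: X and Y become unbound again
print(X.value, Y.value)        # None None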
References
External links
The Free Poplog Portal
The online Poplog Eliza
Photo of ICP award Plaque
Details also available here:
Dynamic programming languages
Extensible syntax programming languages
History of computing in the United Kingdom
Stack-oriented programming languages
University of Sussex |
11944583 | https://en.wikipedia.org/wiki/TerrSet | TerrSet | TerrSet (formerly IDRISI) is an integrated geographic information system (GIS) and remote sensing software developed by Clark Labs at Clark University for the analysis and display of digital geospatial information. TerrSet is a PC grid-based system that offers tools for researchers and scientists engaged in analyzing earth system dynamics for effective and responsible decision making for environmental management, sustainable resource development and equitable resource allocation.
Key features of TerrSet include:
GIS analytical tools for basic and advanced spatial analysis, including tools for surface and statistical analysis, decision support, land change and prediction, and image time series analysis;
an image processing system with multiple hard and soft classifiers, including machine learning classifiers such as neural networks and classification tree analysis, as well as image segmentation for classification;
Land Change Modeler, a land planning and decision support toolset that addresses the complexities of land change analysis and land change prediction;
Habitat and Biodiversity Modeler, a modeling environment for habitat assessment and biodiversity modeling;
Ecosystem Services Modeler, a spatial decision support system for assessing the value of natural capital;
Earth Trends Modeler, an integrated suite of tools for the analysis of image time series to assess climate trends and impacts;
Climate Change Adaptation Modeler, a facility for modeling future climate and its impacts;
GeOSIRIS-REDD, a national-level REDD planning tool to assess deforestation, carbon emissions, agricultural revenue and carbon payments;
GeoMod, a land change modeling tool based around modeling unidirectional transitions between two land cover categories (a simplified sketch of the kind of transition tabulation that underlies such tools follows below).
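The cross-tabulation that underlies this kind of land change analysis can be sketched as follows. This is a generic illustration, not TerrSet's Land Change Modeler or GeoMod algorithm; the two small land cover grids and the class codes are invented for the example.

import numpy as np

# Hypothetical land cover rasters for dates t1 and t2; classes: 1 = forest, 2 = agriculture, 3 = urban.
t1 = np.array([[1, 1, 2], [1, 2, 2], [2, 2, 3]])
t2 = np.array([[1, 2, 2], [2, 2, 3], [2, 3, 3]])

n_classes = 3
# transitions[i, j] = number of cells that changed from class i+1 at t1 to class j+1 at t2
idx = (t1 - 1) * n_classes + (t2 - 1)
transitions = np.bincount(idx.ravel(), minlength=n_classes * n_classes).reshape(n_classes, n_classes)
print(transitions)

# Row-normalising gives empirical transition probabilities, the kind of matrix from
# which a Markov-chain land change projection can start.
probs = transitions / transitions.sum(axis=1, keepdims=True)
print(probs.round(2))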
History and background
TerrSet was first developed in 1987 by Prof. J. Ronald Eastman of Clark University, Department of Geography. Dr. Eastman continues to be the prime developer and chief architect of the software. The software was initially named after cartographer Muhammad al-Idrisi (1100–1166). In June 2020 Clark Labs released the TerrSet 2020 Geospatial Monitoring and Modeling software, version 19. Besides its primary research and scientific focus, TerrSet is popular as an academic tool for teaching the principal theories behind GIS at colleges and universities.
Since 1987 TerrSet has been used by professionals in a wide range of industries in more than 180 countries worldwide. In total, there are over 300 modules for the analysis and display of digital spatial information.
TerrSet is managed and updated by Clark Labs. Based within the Graduate School of Geography at Clark University, Clark Labs and its software tools are known for advancements in areas such as decision support, uncertainty management, classifier development, change and time series analysis, and dynamic modeling. Clark Labs partners with organizations such as the Gordon and Betty Moore Foundation, Google.org, USDA, the United Nations, Conservation International, Imazon and Wildlife Conservation Society.
References
External links
Clark Labs Tech Notes Blog
REDD Blog
Earth Trends Blog
IDRISI Canada Home Page
IDRISI Brazil Website
IDRISI Ecuador
GIS software
Remote sensing software |
8967069 | https://en.wikipedia.org/wiki/ViaMichelin | ViaMichelin | A wholly owned subsidiary of the Michelin Group, ViaMichelin designs, develops and markets digital travel assistance products and services for road users in Europe.
Launched in 2001 and drawing upon a century of Michelin experience in the publication of maps and guides, ViaMichelin provides services designed for both the general public and for business. The company uses its technological expertise to provide a complete service offering (maps, route plans, hotel and restaurant listings and traffic and tourist information, etc.), across a range of media including the Internet (www.viamichelin.co.uk), mobile phones, Personal Digital Assistants (PDAs) and GPS navigation systems, etc. ViaMichelin now employs 200 people and has locations in London, Frankfurt, Madrid, Milan and Paris.
ViaMichelin website
The ViaMichelin website provides mapping coverage for 7 million kilometers of roads and streets across more than 42 European countries.
The website is available in many languages, and its on-line hotel reservations service features more than 60,000 hotels across Europe. Visitors to the site gain access to an exclusive database of Michelin Guide content and listings including 18,000 tourist site recommendations and ratings for 62,000 hotels and restaurants, as well as additional travel services including traffic and weather updates, on-line car-hire booking and a database of speed camera locations, updated regularly and available to download free of charge. The website also features an online store offering electronic updates of the Michelin Guide and a range of GPS accessories, as well as navigation-related software (SD cards, CD-ROMs etc.) designed for third-party GPS navigation devices and PDAs. ViaMichelin Labs is a website used to improve and test new products like Michelin iPhone-specific maps.
Mobile Services
ViaMichelin services were available in the United Kingdom (O2), France (Bouygues Telecom), Italy (Wind), Spain (Telefónica), Germany (E-Plus), Holland (Base) and Belgium via the i-mode portal. Users could access many services, including automatic routing and travel-related address finder services (hotels, restaurants, petrol stations, etc.).
ViaMichelin stopped offering these mobile services at the beginning of 2007.
Software for PDAs
ViaMichelin also develops navigation software designed for PDAs providing PDA users with direct access to ViaMichelin’s route calculation and map display services, as well as comprehensive Michelin guide listings.
Navigation for PDAs
HP iPAQ rx1950 GPS Navigator / Tungsten E2 Navigation Companion / Palm TX GPS Navigation Companion
GPS Navigation
In October 2005, ViaMichelin launched its own portable GPS navigation systems that included Michelin Guide content and a range of additional location-based content including shops, petrol stations, service stations and safety camera locations. ViaMichelin’s traffic information service was also available to vehicle manufacturers. On 11 January 2008 ViaMichelin took the decision to cease production of its GPS range in order to focus on its core activities and services.
External links
ViaMichelin
iOS App Store
Software companies established in 2001
Michelin brands
Mobile route-planning software
Web Map Services
Android (operating system) software
IOS software |
11689003 | https://en.wikipedia.org/wiki/License%20proliferation | License proliferation | License proliferation is the phenomenon of an abundance of already existing and the continued creation of new software licenses for software and software packages in the FOSS ecosystem. License proliferation affects the whole FOSS ecosystem negatively by the burden of increasingly complex license selection, license interaction, and license compatibility considerations.
Impact
Often when a software developer would like to merge portions of different software programs they are unable to do so because the licenses are incompatible. When software under two different licenses can be merged into a larger software work, the licenses are said to be compatible. As the number of licenses increases, the probability that a free and open-source software (FOSS) developer will want to merge software that are available under incompatible licenses increases. There is also a greater cost to companies that wish to evaluate every FOSS license for software packages that they use. Strictly speaking, no one is in favor of license proliferation. Rather, the issue stems from the tendency for organizations to write new licenses in order to address real or perceived needs for their software releases.
License compatibility
License proliferation is especially a problem when licenses have only limited or complicated compatibility relationships with other licenses. Therefore, some consider compatibility with the widely used GNU General Public License (GPL) an important characteristic; for instance David A. Wheeler, as well as the Free Software Foundation (FSF), which maintains a list of the licenses that are compatible with the GPL. On the other hand, some recommend permissive licenses instead of copyleft licenses because of their better compatibility with a larger set of licenses. The Apache Foundation, for instance, criticizes the fact that while the Apache License is compatible with the copyleft GPLv3, the GPLv3 is not compatible with the permissive Apache License: Apache software can be included in GPLv3 software but not vice versa. As another relevant example, the GPLv2 is by itself not compatible with the GPLv3. The GPLv3, released in 2007, was criticized by several authors for adding yet another incompatible license to the FOSS ecosystem.
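Because compatibility is often one-directional, it can be pictured as a directed graph. The toy sketch below encodes only the simplified relationships mentioned in this article (plus the widely known permissiveness of the MIT License); real compatibility analysis is far more nuanced, version-dependent, and not a substitute for legal review.

# Toy model of one-way license compatibility: an edge A -> B means "code under
# license A may be incorporated into a work distributed under license B".
COMPATIBLE_INTO = {
    "MIT": {"Apache-2.0", "GPL-2.0", "GPL-3.0"},
    "Apache-2.0": {"GPL-3.0"},    # Apache-licensed code may go into GPLv3 works...
    "GPL-3.0": set(),             # ...but not the other way round
    "GPL-2.0": set(),             # GPLv2-only is not compatible with GPLv3
}

def can_combine(component_license, project_license):
    return (component_license == project_license
            or project_license in COMPATIBLE_INTO.get(component_license, set()))

print(can_combine("Apache-2.0", "GPL-3.0"))   # True
print(can_combine("GPL-3.0", "Apache-2.0"))   # False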
Vanity licenses
A vanity license is a license written by a company or person for no reason other than to write their own license ("NIH syndrome"). If a new license offers no obvious improvement or difference over another, more common FOSS license, it is often criticized as a vanity license. As of 2008, many people created custom new licenses for their newly released programs without knowing the requirements for a FOSS license and without realizing that using a nonstandard license can make a program almost useless to others.
Solution approaches
GitHub's stance
In July 2013, GitHub launched a license selection wizard called choosealicense. The choosealicense front page offers only three licenses as a quick selection: the MIT License, the Apache License and the GNU General Public License; additional licenses are offered on subpages and via links. By 2015, approximately 77% of all licensed projects on GitHub were licensed under at least one of these three licenses.
Google's stance
From 2006 Google Code only accepted projects licensed under the following seven licenses:
Apache License 2.0
New BSD License
MIT License
GNU General Public License 2.0
GNU Lesser General Public License 2.1
Mozilla Public License 1.1
Artistic License/GPL dual-licensed (often used by the Perl community)
One year later, around 2008, the GNU General Public License 3.0 was added and strongly recommended together with the permissive Apache License; the AGPLv3 was notably excluded in order to reduce license proliferation.
In 2010, Google removed these restrictions, and announced that it would allow projects to use any OSI-approved license (see OSI's stance below), but with the limitation that public domain projects are only allowed as single case decision.
OSI's stance
The Open Source Initiative (OSI) maintains a list of approved licenses. Early in its history, the OSI contributed to license proliferation by approving vanity and non-reusable licenses. An OSI License Proliferation Project was started in 2004 and prepared a License Proliferation Report in 2007. The report defined the following classes of licenses:
Licenses that are popular and widely used or with strong communities
International licenses
Special purpose licenses
Other/Miscellaneous licenses
Licenses that are redundant with more popular licenses
Non-reusable licenses
Superseded licenses
Licenses that have been voluntarily retired
Uncategorized Licenses
The group of "popular" licenses include nine licenses: Apache License 2.0, New BSD license, GPLv2, LGPLv2, MIT license, Mozilla Public License 1.1, Common Development and Distribution License, Common Public License, Eclipse Public License.
FSF's stance
Richard Stallman, former president of the FSF, and Bradley M. Kuhn, former Executive Director, have argued against license proliferation since 2000, when they instituted the FSF license list. The list urges developers to license their software under GPL-compatible free software licenses, though multiple GPL-incompatible free software licenses are listed with a comment stating that there is no problem using or working on software already under those licenses, while urging readers of the list not to use those licenses on software they write.
Ciarán O'Riordan of FSF Europe argues that the main thing that the FSF can do to prevent license proliferation is to reduce the reasons for making new licenses in the first place, in an editorial entitled How GPLv3 tackles license proliferation. Generally the FSF Europe consistently recommends the use of the GNU GPL as much as possible, and when that is not possible, to use GPL-compatible licenses.
Others
In 2005 Intel has voluntarily retracted their Intel Open Source License from the OSI list of open source licenses and has also ceased to use or recommend this license to reduce license proliferation.
The 451 Group published a proliferation report in June 2009 called The Myth of Open Source License Proliferation. A 2009 paper from the University of Washington School of Law titled Open Source License Proliferation: Helpful Diversity or Hopeless Confusion? called for three things as a solution: "A Wizzier Wizzard" (for license selection), "Best Practices and Legacy Licenses", and "More Legal Services For Hackers". The OpenSource Software Collaboration Counseling (OSSCC) recommends, based on the nine licenses originally recommended by the OSI, five licenses: the Apache License 2.0, New BSD License, CDDL, MIT License, and to some degree the MPL, as they support collaboration, grant patent use and offer patent protection. Notably missing is the GPL, because "this license cannot be used inside other works under a different license."
See also
License compatibility
Rights Expression Language
References
External links
Open source license proliferation, a broader view by Raymond Nimmer
Larry Rosen argues that different licenses can be a good thing Larry Rosen
Licensing howto by Eric S. Raymond
License proliferation for Medical Software by Fred Trotter Advocates that for Health Software, only the Google seven should be used.
How to choose a license for your own work Free Software Foundation
Proliferation
Licensing |
4905289 | https://en.wikipedia.org/wiki/Public%20health%20informatics | Public health informatics | Public health informatics has been defined as the systematic application of information and computer science and technology to public health practice, research, and learning. It is one of the subdomains of health informatics.
Definition
Public health informatics is defined as the use of computers, clinical guidelines, communication and information systems applied to the vast majority of public health-related professions, such as nursing, clinical and hospital care, public health, and medical research.
United States
In developed countries like the United States, public health informatics is practiced by individuals in public health agencies at the federal and state levels and in the larger local health jurisdictions. Additionally, research and training in public health informatics takes place at a variety of academic institutions.
At the federal Centers for Disease Control and Prevention in Atlanta, Georgia, the Public Health Surveillance and Informatics Program Office (PHSIPO) focuses on advancing the state of information science and applies digital information technologies to aid in the detection and management of diseases and syndromes in individuals and populations.
The bulk of the work of public health informatics in the United States, as with public health generally, takes place at the state and local level, in the state departments of health and the county or parish departments of health. At a state health department the activities may include: collection and storage of vital statistics (birth and death records); collection of reports of communicable disease cases from doctors, hospitals, and laboratories, used for infectious disease surveillance; display of infectious disease statistics and trends; collection of child immunization and lead screening information; daily collection and analysis of emergency room data to detect early evidence of biological threats; collection of hospital capacity information to allow for planning of responses in case of emergencies. Each of these activities presents its own information processing challenge.
Collection of public health data
Since the beginning of the World Wide Web, public health agencies with sufficient information technology resources have been transitioning to web-based collection of public health data and, more recently, to automated messaging of the same information. From roughly 2000 to 2005 the Centers for Disease Control and Prevention, under its National Electronic Disease Surveillance System (NEDSS), built and provided free to states a comprehensive web- and message-based reporting system called the NEDSS Base System (NBS). Because funding was limited and fiefdom-based systems were considered unwise, only a few states and larger counties have built their own versions of electronic disease surveillance systems, such as Pennsylvania's PA-NEDSS. These do not provide timely, full interstate notification services, causing an increase in disease rates versus the federal NEDSS product.
To promote interoperability, the CDC has encouraged the adoption in public health data exchange of several standard vocabularies and messaging formats from the health care world. The most prominent of these are: the Health Level 7 (HL7) standards for health care messaging; the LOINC system for encoding laboratory test and result information; and the Systematized Nomenclature of Medicine (SNOMED) vocabulary of health care concepts.
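A flavor of what such standards-based messaging looks like at the data level is given by the following sketch, which splits an HL7 version 2-style message into segments and fields. The message shown is a simplified, hypothetical example rather than a complete or validated HL7 message, and production systems use dedicated HL7 libraries instead of string splitting.

# Minimal illustration: HL7 v2 messages consist of segments separated by carriage
# returns, with fields separated by "|" and components by "^". The message is hypothetical.
raw = "MSH|^~\\&|LAB|ACME|ELR|STATEDOH|202301150830||ORU^R01|MSG0001|P|2.5.1\rPID|1||12345||DOE^JANE\rOBX|1|CWE|LOINC_CODE||Positive"

def parse_hl7(message):
    segments = {}
    for segment in message.split("\r"):
        fields = segment.split("|")
        segments.setdefault(fields[0], []).append(fields[1:])
    return segments

parsed = parse_hl7(raw)
print(parsed["PID"][0][4])   # DOE^JANE  (patient name field; components separated by "^")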
Since about 2005, the CDC has promoted the idea of the Public Health Information Network to facilitate the transmission of data from various partners in the health care industry and elsewhere (hospitals, clinical and environmental laboratories, doctors' practices, pharmacies) to local health agencies, then to state health agencies, and then to the CDC. At each stage the entity must be capable of receiving the data, storing it, aggregating it appropriately, and transmitting it to the next level. A typical example would be infectious disease data, which hospitals, labs, and doctors are legally required to report to local health agencies; local health agencies must report to their state public health department; and which the states must report in aggregate form to the CDC. Among other uses, the CDC publishes the Morbidity and Mortality Weekly Report (MMWR) based on these data acquired systematically from across the United States.
Major issues in the collection of public health data are: awareness of the need to report data; lack of resources of either the reporter or collector; lack of interoperability of data interchange formats, which can be at the purely syntactic or at the semantic level; variation in reporting requirements across the states, territories, and localities.
Public health informatics can be divided into three broad categories.
Study models of different systems
The first category is to discover and study models of complex systems, such as disease transmission. This can be done through different types of data collection, such as hospital surveys or electronic surveys submitted to an organization such as the CDC. Transmission rates and disease incidence or surveillance data can be obtained from government organizations such as the CDC or from global organizations such as the WHO. Disease transmission and incidence are not the only subjects of study: public health informatics can also examine, for example, people with and without health insurance and the rates at which they visit the doctor. Before the advent of the internet, public health data in the United States, like other healthcare and business data, were collected on paper forms and stored centrally at the relevant public health agency. If the data were to be computerized they required a distinct data entry process, were stored in the various file formats of the day and were analyzed by mainframe computers using standard batch processing.
Storage of public health data
The second category is to find ways to improve the efficiency of different public health systems. This involves the methods of collection, the storage of data, and how the data are used to address current health problems. To keep everything standardized, vocabulary and word usage need to be consistent across all systems. Finding new ways to link together and share new data with current systems is important to keep everything up to date.
Storage of public health data shares the same data management issues as other industries. And like other industries, the details of how these issues play out are affected by the nature of the data being managed.
Due to the complexity and variability of public health data, like health care data generally, the issue of data modeling presents a particular challenge. While a generation ago flat data sets for statistical analysis were the norm, today's requirements of interoperability and integrated sets of data across the public health enterprise require more sophistication. The relational database is increasingly the norm in public health informatics. Designers and implementers of the many sets of data required for various public health purposes must find a workable balance between very complex and abstract data models such as HL7's Reference Information Model (RIM) or CDC's Public Health Logical Data Model, and simplistic, ad hoc models that untrained public health practitioners come up with and feel capable of working with.
Due to the variability of the incoming data to public health jurisdictions, data quality assurance is also a major issue.
Analysis of public health data
Finally, the last category can be thought of as maintaining and enriching current systems and models so they can adapt to the growing volume of data and to the storing and sorting of new data. This can be as simple as connecting directly to an electronic data collection source, such as hospital health records, or drawing on public information (for example from the CDC) about disease rates and transmission. Finding new algorithms that can sort through large quantities of data quickly and effectively is also necessary.
The need to extract usable public health information from the mass of data available requires the public health informaticist to become familiar with a range of analysis tools, from business intelligence tools for producing routine or ad hoc reports, to sophisticated statistical analysis tools such as DAP/SAS and PSPP/SPSS, to Geographical Information Systems (GIS) that expose the geographical dimension of public health trends. Such analyses usually require methods that appropriately secure the privacy of the health data. One approach is to separate the individually identifiable variables of the data from the rest.
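A very small example of the kind of routine calculation behind such reports is converting raw case counts into incidence rates per 100,000 population, so that jurisdictions of different sizes can be compared. The counties, counts and populations below are made up for the illustration.

# Incidence rate per 100,000 population from hypothetical surveillance counts.
cases = {"County A": 42, "County B": 7, "County C": 133}
population = {"County A": 310_000, "County B": 48_000, "County C": 1_250_000}

def incidence_per_100k(county):
    return 100_000 * cases[county] / population[county]

for county in cases:
    print(f"{county}: {incidence_per_100k(county):.1f} cases per 100,000")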
Applications in health surveillance and epidemiology
Several organizations provide useful information for professionals who want to become more involved in public health informatics, such as the American Medical Informatics Association (AMIA). AMIA is for professionals involved in health care, informatics research and biomedical research, including physicians, scientists, researchers, and students. The main goals of AMIA are to move research from ‘bench to bedside’, help improve the impact of health innovations, and advance the field of public health informatics. It holds annual conferences and offers online classes and webinars, which are free to members. There is also a career center specific to the biomedical and health informatics community.
Many jobs or fellowships in public health informatics are offered. The CDC (Center for Disease Control) has various fellowship programs, while multiple colleges/companies offer degree programs or training in this field.
For more information on these topics, follow the links below:
http://www.jhsph.edu/departments/health-policy-and-management/certificates/public-health-informatics/what-is-health-informatics.html
https://web.archive.org/web/20150406033743/http://www.phii.org/what-we-do
SAPPHIRE (Health care) or Situational Awareness and Preparedness for Public Health Incidences and Reasoning Engines is a semantics-based health information system capable of tracking and evaluating situations and occurrences that may affect public health.
Social media analytics
Since the late 2000s, data from social media websites such as Twitter and Facebook, as well as search engines such as Google and Bing, have been used extensively in detecting trends in public health.
References
Public Health Informatics and Information Systems by Patrick W. O’Carroll, William A. Yasnoff, M. Elizabeth Ward, Laura H. Ripp, Ernest L. Martin, D.A. Ross, A.R. Hinman, K. Saarlas, William H. Foege (Hardcover - Oct 16, 2002)
A Vision for More Effective Public Health Information Technology on SSRN
Olmeda, Christopher J. (2000). Information Technology in Systems of Care. Delfin Press.
https://www.fda.gov/fdac/features/596_info.html on FDA
Health Data Tools and Statistics
Public health |
5155132 | https://en.wikipedia.org/wiki/Stan%20Frankel | Stan Frankel | Stanley Phillips Frankel (1919 – May, 1978) was an American computer scientist. He worked in the Manhattan Project and developed various computers as a consultant.
Early life
He was born in Los Angeles, attended graduate school at the University of Rochester, received his PhD in physics from the University of California, Berkeley, and began his career as a post-doc student under J. Robert Oppenheimer at University of California, Berkeley in 1942.
Career
Frankel helped develop computational techniques used in the nuclear research taking place at the time, notably making some of the early calculations relating to the diffusion of neutrons in a critical assembly of uranium with Eldred Nelson. He joined the T (Theoretical) Division of the Manhattan Project at Los Alamos in 1943. His wife Mary Frankel was also hired to work as a human computer in the T Division. While at Los Alamos, Frankel and Nelson organized a group of scientists' wives, including Mary, to perform some of the repetitive calculations using Marchant and Friden desk calculators to divide the massive calculations required for the project. This became Group T-5 under New York University mathematician Donald Flanders when he arrived in the late summer of 1943.
Mathematician Dana Mitchell noticed that the Marchant calculators broke under heavy use and persuaded Frankel and Nelson to order IBM 601 punched card machines. This experience led to Frankel's interest in the then-dawning field of digital computers. In August 1945, Frankel and Nick Metropolis traveled to the Moore School of Engineering in Pennsylvania to learn how to program the ENIAC computer. That fall they helped design a calculation that would determine the likelihood of being able to develop a fusion weapon. Edward Teller used the ENIAC results to prepare a report in the spring of 1946 that answered this question in the affirmative.
After losing his security clearance (and thus his job) during the red scare of the early 1950s, Frankel became an independent computer consultant. He was responsible for designing the CONAC computer for the Continental Oil Company during 1954–1957 and the LGP-30 single-user desk computer in 1956, which was licensed from a computer he designed at Caltech called MINAC. The LGP-30 was moderately successful, selling over 500 units. He served as a consultant to Packard Bell Computer on the design of the PB-250 computer.
Later in his career, he became involved in the development of desktop electronic calculators. The first calculator project he worked on produced the SCM Marchant Cogito 240 and 240SR electronic calculators, introduced in 1965. In the interest of improving upon the design of what became the SCM Cogito 240 and 240SR calculators, Frankel developed a new machine he called NIC-NAC, which was based on a microcoded architecture. NIC-NAC was built in prototype form in his home as a proof of concept, and the machine worked well. Due to its microcoded implementation, the machine was very efficient in terms of the number of components it required. Frankel, through his connections at SCM, was put in contact with Diehl, a West German calculating machine company well known in Europe for its exquisitely designed electro-mechanical calculators. Diehl wanted to break into the electronic calculator marketplace, but did not have the expertise itself. Frankel was contracted to develop a desktop electronic calculator for Diehl, and moved to West Germany to undertake the project. The project resulted in a calculator called the Diehl Combitron, a desktop printing electronic calculator that was also user-programmable. The calculator used the concepts behind NIC-NAC's microcoded architecture, loading its microcode into a magnetostrictive delay line at power-up via an internal punched stainless-steel tape that contained the microcode. Another magnetostrictive delay line contained the working registers, memory registers, and user program. The Combitron design was later augmented to include the ability to attach external input/output devices, with this machine called the Combitron S. Frankel's microcoded architecture would serve as the basis for a number of follow-on calculators developed and marketed by Diehl. SCM later became an OEM customer of Diehl, marketing the Combitron as the SCM Marchant 556PR.
Scientific papers
Frankel published a number of scientific papers throughout his career. Some of them explored the use of statistical sampling techniques and machine driven solutions. In a 1947 paper in Physical Review, he and Metropolis predicted the utility of computers in replacing manual integration with iterative summation as a problem solving technique. As head of a new Caltech digital computing group he worked with PhD candidate Berni Alder in 1949–1950 to develop what is now known as Monte Carlo analysis. They used techniques that Enrico Fermi had pioneered in the 1930s. Due to a lack of local computing resources, Frankel travelled to England in 1950 to run Alder's project on the Manchester Mark 1 computer. Unfortunately, Alder's thesis advisor was unimpressed, so Alder and Frankel delayed publication of their results until 1955, in the Journal of Chemical Physics. This left the major credit for the technique to a parallel project by a team including Teller and Metropolis who published similar work in the same journal in 1953.
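The Monte Carlo idea associated with this work can be illustrated with a very small example: estimating an integral by averaging the integrand at random sample points instead of integrating analytically. The integrand here is arbitrary; Frankel and Alder's actual calculations concerned the statistical mechanics of interacting particles and were far more elaborate.

import math
import random

def mc_integrate(f, a, b, n=100_000):
    """Estimate the integral of f over [a, b] as (b - a) times the mean of f at random points."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

estimate = mc_integrate(math.sin, 0.0, math.pi)   # exact value is 2.0
print(round(estimate, 3))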
In September, 1959, Frankel published a paper in IRE Transactions on Electronic Computers proposing a microwave computer that used travelling-wave tubes as digital storage devices, similar to, but faster than the acoustic delay lines used in the early 1950s. Frankel published a paper on measuring the thickness of soap films in the Journal of Applied Physics in 1966.
Publications
Frankel, S. Phillips, “Elementary Derivation of Thermal Diffusion”, Physical Review, Volume 57, Number 7, April 1, 1940, p. 661.
Frankel, S. and N Metropolis, “Calculations in the Liquid-Drop Model of Fission”, Physical Review, Volume 72, Number 10, November 15, 1947, p. 914–925.
Frankel, Stanley P., “Convergence Rates of Iterative Treatments of Partial Differential Equations”, Mathematical Tables and Other Aids to Computation, Volume 4, 1950, p. 65–75.
Frankel, S. P., “The Logical Design of a Simple General Purpose Computer”, IRE Transactions on Electronic Computers, March 1957, p. 5–14.
Frankel, S. P., “On the Minimum Logical Complexity Required for a General Purpose Computer”, IRE Transactions on Electronic Computers, December 1958, p. 282–284.
Frankel, Stanley P., “A Logic Design for a Microwave Computer”, IRE Transactions on Electronic Computers, September 1959, p. 271–276.
Frankel, Stanley P. and Karol J. Mysels, “On the ‘Dimpling’ During the Approach of Two Surfaces”, Journal of Physical Chemistry, Volume 66, January 1962, p. 190–191.
Frankel, Stanley P. and Karol J. Mysels, “Simplified Theory of Reflectometric Thickness Measurement of Structured Soap and Related Films”, Journal of Applied Physics, Volume 37, Number 10, September 1966, p. 3725–3728.
References
External links
Story of Stan P. Frankel, designer of the LGP-30, with photos.
Recirculating Memory Timing, filed February, 1964, issued June, 1970
Surely you're joking, Mr. Feynman! – R. Feynman recalled Frankel's contribution to Manhattan Project
1919 births
1978 deaths
Computer designers
20th-century American physicists
Manhattan Project people
University of California, Berkeley alumni
University of Rochester alumni |
5384260 | https://en.wikipedia.org/wiki/Eldon%20Bargewell | Eldon Bargewell | Major General Eldon Arthur Bargewell (August 13, 1947 – April 29, 2019) was a United States Army General officer. He served as commander of the U.S. Army's Delta Force unit.
Early life and education
Bargewell was born in Hoquiam, Washington and graduated from Hoquiam High School in 1965, before enlisting in the U.S. Army in 1967. He completed the Special Forces Qualification Course in 1968. During the Vietnam War, Bargewell was accepted into MACV-SOG, where he was assigned to Command and Control North (CCN) at Forward Operating Base 4 at Da Nang and served as a non-commissioned officer team leader for Reconnaissance Team "Viper" (all CCN teams were named for states or snakes). While serving with CCN, Bargewell earned the Distinguished Service Cross in September 1971 for his actions in combat, saving his team and getting them to safety.
Career
Bargewell graduated from Officer Candidate School and received his commission in 1973. In addition, he completed a Bachelor of Science degree in resource management at Troy State University.
Bargewell's first assignment was as a member of the 2nd Battalion 75th Ranger Regiment at Fort Lewis, Washington, where he later served as rifle platoon leader and executive officer. As a captain, Bargewell was assigned as Rifle Company Commander with 2nd Battalion, 47th Infantry. In 1981 Bargewell volunteered for and completed a specialized selection and operator training course for assignment to Delta Force where he would serve as Operations Officer, Squadron Executive Officer, Troop commander, Squadron Commander (twice), Deputy Commander and unit commander from July 1996 to July 1998.
While in Delta Force, Bargewell participated in Operation Acid Gambit during the invasion of Panama, including the daring rescue of American citizen Kurt Muse from the Modelo prison. After the successful extraction of the hostage, the MH-6 Little Bird transporting Muse and several operators crashed behind enemy lines, wounding many of them; they took cover in the city until they were recovered by an armored personnel carrier.
He commanded a Delta Force Squadron (A Squadron) during Operation Desert Storm in western Iraq. In 1998 Bargewell became Commanding General of Special Operations Command Europe, followed by assistant chief of staff for SFOR military operations in Sarajevo.
Bargewell returned to the continental United States and served as director of the center of operations, plans, and policies of United States Special Operations Command. In 2005, Bargewell became Director of Strategic Operations at Multinational Force Iraq. While serving as the Operations Officer Bargewell pursued an outside administrative investigation as to how knowledge of the Haditha incident in Iraq passed up the Marine chain of command and whether or not any commanders lied in their reports. The informal investigation, pursuant to Army regulation AR 15-6, began on March 19, 2006 and was expected to examine how servicemembers and their commanders were trained in the rules of engagement. The completed report was sent to Army Lt. Gen. Peter W. Chiarelli, the second-ranked US commander in Iraq, on the morning of June 15, 2006. This was separate from a criminal investigation being conducted by the Naval Criminal Investigative Service.
Distinguished Service Cross
Eldon A. Bargewell
General Orders: Headquarters, U.S. Army, Vietnam, General Orders No. 3391 (November 30, 1971)
Action Date: 27-Sep-71
Service: United States Army
Rank: Staff Sergeant
Company: Command and Control (North), TF 1, SOG
Regiment: 5th Special Forces Group (Airborne)
Division: 1st Special Forces Command (Airborne)
Citation:
The President of the United States of America, authorized by Act of Congress, July 9, 1918 (amended by act of July 25, 1963), takes pleasure in presenting the Distinguished Service Cross to Staff Sergeant Eldon A. Bargewell, United States Army, for extraordinary heroism in connection with military operations involving conflict with an armed hostile force in the Republic of Vietnam, while serving with Command and Control (North), Task Force 1, Studies and Observations Group, 5th Special Forces Group (Airborne), 1st Special Forces, attached to U.S. Army Vietnam Training Advisory Group (TF1AE), U.S. Army Vietnam Training Support Headquarters. Staff Sergeant Bargewell distinguished himself on 27 September 1971 while serving as a member of a long range reconnaissance team operating deep in enemy territory. On that date, his team came under attack by an estimated 75 to 100 man enemy force. Staff Sergeant Bargewell suffered multiple fragmentation wounds from an exploding B-40 rocket in the initial assault, but despite the serious wounds, placed a deadly volume of machine gun fire on the enemy line. As the enemy advanced, he succeeded in breaking the assault and forced them to withdraw with numerous casualties. When the enemy regrouped, they resumed their assault on the beleaguered team, placing a heavy volume of small arms and automatic weapons fire on Staff Sergeant Bargewell's sector of the defensive perimeter. Again he exposed himself to the enemy fire in order to hold his position and prevent the enemy from overrunning the small team. After breaking the enemy assault, the team withdrew to a nearby guard. At the landing zone, Staff sergeant Bargewell refused medical treatment in order to defend a sector of the perimeter, and insured the safe extraction of his team. Staff Sergeant Bargewell's extraordinary heroism and devotion to duty were in keeping with the highest traditions of the military service and reflect great credit upon himself, his unit, and the United States Army.
Death
Bargewell died near his home at the age of 71.
Awards and decorations
Bargewell was inducted into the U.S. Army Ranger Hall of Fame in 2011.
References
External links
"House to Look Into Probe of Pendleton Marines" by Tony Perry, LA Times
1947 births
2019 deaths
United States Army generals
Members of the United States Army Special Forces
Recipients of the Distinguished Service Cross (United States)
Recipients of the Legion of Merit
United States Army personnel of the Vietnam War
United States Army personnel of the Gulf War
People from Hoquiam, Washington
Military personnel from Washington (state)
Delta Force
United States Army Rangers
Accidental deaths in Alabama |
68891230 | https://en.wikipedia.org/wiki/Berwick%20Packet%20%281798%20ship%29 | Berwick Packet (1798 ship) | Berwick Packet was a smack launched at Berwick in 1798. She sailed for some years for the Old Ship Company, of Berwick in the packet trade between London and Berwick. After a change of ownership and homeport around 1806, Berwick Packet traded more widely. In 1808 she repelled an attack by a French privateer. Then in 1809 Berwick Packet served briefly as a transport in a naval campaign. She then returned to mercantile trade until she was wrecked in November 1827 on a voyage from the Baltic.
Career
Berwick Packet first appeared in Lloyd's Register (LR) in 1799.
Leith Packet was wrecked at "Sandhale" on 8 March 1807 while on a voyage from Leith to Hull. Five of her eight crew survived until 11 March, when Berwick Packet, Jameson, master, rescued them. All the crew had taken to her rigging, but the cook, the master, and his son died of exhaustion during the 33 hours before Berwick Packet arrived. During the crew's exposure, people on shore gathering what had washed up saw their plight but made no effort to render assistance.
On 17 February 1808 Berwick Packet, Jameson, master, was off Dimlinton when a French privateer twice attempted to board her. She drove off the attack by firing a 12-pounder.
The Royal Navy hired Berwick Packet on 26 June 1809. She was one of 15 small transports that the Navy hired for the ill-fated Walcheren Campaign. Her commander was Lieutenant David Ewen Bartholomew. Her first assignment was to carry Congreve rockets from the Woolwich Arsenal to Walcheren. She participated in the capture of Flushing and was generally useful for the remainder of the campaign. The Navy returned Berwick Packet to her owners on 28 October. Berwick Packet was the only vessel of the 15 transports actually listed by name in the prize money notice.
Berwick Packet, Armstrong, master, arrived at Plymouth in November 1812 from Cadiz. She had developed a leak after having struck the Seven Stone, near Scilly. She was going to unload.
Fate
On 10 November 1827 Berwick Packet, Hughes, master, was driven ashore at Gothenburg, Sweden. She was on a voyage from Saint Petersburg to Leghorn. Most of the cargo was saved but the vessel herself was a wreck.
Notes, citations, and references
Notes
Citations
References
1798 ships
Packet (sea transport)
Age of Sail merchant ships of England
Maritime incidents in 1807
Maritime incidents in November 1827 |
34629249 | https://en.wikipedia.org/wiki/United%20States%20v.%20Nosal | United States v. Nosal | United States v. Nosal, 676 F.3d 854 (9th Cir. 2012) was a United States Court of Appeals for the Ninth Circuit decision dealing with the scope of criminal prosecutions of former employees under the Computer Fraud and Abuse Act (CFAA). The Ninth Circuit's first ruling (Nosal I) established that employees have not "exceeded authorization" for the purposes of the CFAA if they access a computer in a manner that violates the company's computer use policies—if they are authorized to access the computer and do not circumvent any protection mechanisms.
On April 24, 2013, U.S. Attorney Melinda Haag announced that Nosal was convicted by a federal jury of all charges contained in a six-count indictment. Nosal appealed his conviction to the Ninth Circuit. On July 5, 2016, a three-judge panel held 2-1 that Nosal had acted "without authorization" and affirmed his conviction. In this second decision (Nosal II), the Ninth Circuit attempted to clarify the meaning of "without authorization" in the context of the CFAA.
Background
In October 2004, David Nosal resigned from his position at Korn/Ferry, an executive search and recruiting company. As part of his separation agreement, Nosal agreed to serve as an independent contractor for Korn/Ferry and not to compete with them for one year; in exchange, Korn/Ferry agreed to compensate Nosal with two lump-sum payments and twelve monthly payments of $25,000. A few months after leaving Korn/Ferry, Nosal solicited three Korn/Ferry employees to help him start a competing executive search business. Before leaving the company, the employees downloaded a large volume of "highly confidential and proprietary" data from Korn/Ferry's computers, including source lists, names, and contact information for executives.
On June 26, 2008, Nosal and the three employees were indicted by the federal government on twenty counts of violations of the Computer Fraud and Abuse Act. The government alleged that the defendants "knowingly and with intent to defraud" exceeded authorized access to Korn/Ferry's computers.
Nosal appealed the indictment, claiming that the CFAA was "aimed primarily at computer hackers" and that it "does not cover employees who misappropriate information or who violate contractual confidentiality agreements". Nosal further argued that the employees were, in principle, permitted to access the information in their role as Korn/Ferry employees, and thus they did not "act without authorization" or "exceed authorized access" as written in Section (a)(4) of the CFAA.
After initially rejecting these arguments, the district court eventually agreed with Nosal and dismissed the five counts of the indictment arising from Section (a)(4). The government appealed this decision, arguing that Nosal and his accomplices did indeed exceed authorized access because they violated the company's computer access policies, which restricted the "use and disclosure of all [database] information, except for legitimate Korn/Ferry business".
Court case
The case was based heavily on the Ninth Circuit's interpretation of language in the CFAA statute, especially Section (a)(4), under which the more serious charges against the defendants stemmed.
Section (a)(4) of the CFAA makes liable anyone who "knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value." Neither party disputed that Nosal's accomplices were authorized to access Korn/Ferry computers, so the case hinged on whether or not they exceeded their authorized access when they downloaded the information for fraudulent purposes.
The Ninth Circuit Court relied on their earlier decision in LVRC Holdings v. Brekka, which centered on an employee who transferred business documents from his employer's computer to his personal email account and was later sued by the employer under a civil provision in the CFAA. In their ruling for that case, the court emphasized a distinction between the phrases "without authorization" and "exceeding authorized access" from CFAA Section (a)(4), and in so doing, provided an interpretation of the statutory language. They wrote, "an individual who is authorized to use a computer for certain purposes but goes beyond those limitations is considered by the CFAA as someone who has 'exceed[ed] authorized access.' On the other hand, a person who uses a computer 'without authorization' has no rights, limited or otherwise, to access the computer in question."
The court adopted this interpretation and expanded its scope, ruling that an employee "exceeds authorized access" under the CFAA when they use a computer in way that violates an employer's access restrictions—including policies governing how information on the computer may be used.
Regarding the question of how to determine when a violation occurs, the court rejected the approach used in International Airport Centers v. Citrin, which asserted that an employee loses authorization when he or she "violates a state law duty of loyalty because...the employee's actions [terminate] the employer-employee relationship 'and with it his [or her] authority to access the [computer]'".
Instead, the court cited their finding from Brekka that for purposes of the CFAA, it is the action of the employer that determines whether an employee is authorized to access the computer. They decided that, as a logical extension of this finding, the question of whether an employee "exceeds authorized access" is likewise determined by the employer's actions, including (but not limited to) the promulgation of computer use restrictions. Since Korn/Ferry indeed had such computer use restrictions, which the defendants violated when they accessed the executive database for fraudulent purposes, the Ninth Circuit court reversed the district court's decision and remanded the case with instructions to reinstate the five counts under Section (a)(4).
Dissent
Judge Campbell dissented, arguing that the court's decision renders the CFAA's provisions unconstitutionally vague, since computer use policies are not written "with the definiteness or precision that would be required for a criminal statute" and they can be changed without notice. The ruling, she argued, places an undue burden on employees to stay current on such policies in order to protect themselves against possible criminal prosecution.
Impact and criticism
Nosal argued that the ruling would make criminals out of millions of employees who use their work computer to do trivial tasks such as checking basketball scores on the internet or reading personal email—behaviors that (technically) violate typical computer use policies. Many online law pundits expressed similar concerns, fearing that one could be prosecuted under federal law for violating a website's terms of service—for example, lying about one's age on Facebook.
The court defended its ruling, noting that such benign behaviors lack the requisite conditions of "intent to defraud" and "furthering fraud by obtaining something of value" as required for prosecution under CFAA Section (a)(4). However, other provisions in the CFAA do not include such requirements, so the current ruling may still admit prosecution of trivial behaviors that had previously been considered out of the scope of the CFAA.
Follow up
On October 27, 2011, the Ninth Circuit agreed to rehear the case en banc. The case was reargued before the full Ninth Circuit panel on December 15, 2011, in San Francisco. The court's opinion, published on April 10, 2012, adopted a narrow interpretation of the CFAA, holding that the phrase "exceeds authorized access" does not extend to violations of use restrictions.
See also
The Truth Behind the Nosal Case
LVRC Holdings LLC v. Brekka
International Airport Centers, L.L.C. v. Citrin
Lee v. PMSI, Inc.
EF Cultural Travel B.V. v. Zefer Corp., 318 F.3d 58, 63 (1st Cir. 2003)
United States v. Fiander, 547 F.3d 1036, 1041 n.3 (9th Cir. 2008)
United States v. Boren, 278 F.3d 911, 913 (9th Cir. 2002)
References
External references
Parties
David Nosal at Nosal Partners
Korn/Ferry International
Articles
List of documents related to CFAA
Electronic Frontier Foundation web page about the case
Shawn E. Tuma: "What does the CFAA mean and why should I care?" - A Primer on the Computer Fraud and Abuse Act for Civil Litigator
Dale C. Campbell: Seventh and Ninth circuits split on what constitutes without authorization within the meaning of the CFAA
En banc hearing
Nick Akerman's article on the en banc hearing of December 15
Video recording of United States v Nosal en banc hearing.
Orin Kerr discussing the en banc hearing, with a follow-up article by Kerr
Ninth Circuit Ruling Trimming CFAA Claims for Misappropriation Reminds Employers that Technical Network Security is the First Defense
2013
Nosal Convicted of Computer Fraud and Abuse Act Crime Despite His Ninth Circuit Win
Man Convicted of Hacking Despite Not Hacking
United States Court of Appeals for the Ninth Circuit cases
United States computer case law
United States Internet case law
2011 in United States case law |
40623623 | https://en.wikipedia.org/wiki/Computer-assisted%20interventions | Computer-assisted interventions | Computer-assisted interventions (CAI) is a field of research and practice, where medical interventions are supported by computer-based tools and methodologies. Examples include:
Medical robotics
Surgical and interventional navigation
Imaging and image processing methods for CAI
Clinical feasibility studies of computer-enhanced interventions
Tracked and guided biopsies
Alignment of pre-procedure images with the patient during the procedure
Intraoperative decision supports
Skill analysis and workflow studies in CAI
Clinical studies of CAI showing first-in-man or early efficacy results
User interfaces and visualization systems for CAI
Surgical and interventional systems
Novel surgical devices and sensors
User performance studies
Validation and evaluation of CAI technology
The basic paradigm of patient-specific interventional medicine is a closed loop process, consisting of
combining specific information about the patient with the physician's general knowledge to determine the patient's condition;
formulating a plan of action;
carrying out this plan; and
evaluating the results.
The experience gathered over many patients may be combined to improve treatment plans and protocols for future patients. Computer-based technology assists medical professionals in processing and acting on complex information.
Methods
Medical robotics
Robotic and telerobotic interventions
Surgical and interventional navigation
Alignment of pre-procedure images with the patient during the procedure
Imaging and image processing methods for CAI
Intraoperative decision support
Surgical process modeling and analysis
In order to gain an explicit and formal understanding of surgery, the field of analyses and modelling of surgical procedures has recently emerged. The challenge is to support the surgeon and the surgical procedure through the understanding of Operating Room (OR) activities, with the help of sensor- or human-based systems. Related surgical models can then be introduced into a new generation of Computer-Assisted Interventions systems to improve the management of complex multimodal information, improve surgical workflows, increase surgical efficiency and the quality of care in the OR. Models created by these different approaches may have a large impact in future surgical innovations, whether for planning, intra-operative or post-operative purposes.
This idea of describing the surgical procedure as a sequence of tasks was first introduced by MacKenzie et al. (2001) and formalised by Jannin et al. (2001). The term Surgical Process (SP) has been defined as a set of one or more linked procedures or activities that collectively realise a surgical objective within the context of an organisational structure defining functional roles and relationships. This term is generally used to describe the steps involved in a surgical procedure. A Surgical Process Model (SPM) has been defined as a simplified pattern of an SP that reflects a predefined subset of interest of the SP in a formal or semi-formal representation. It relates to the performance of an SP with support from a workflow management system.
Surgical process models are derived from observer-based acquisition or sensor-based acquisition (such as signals or videos).
Related terms: Surgical workflow analysis, ...
Surgical and interventional systems
Novel surgical devices and sensors
User Interface and ergonomics
Visualization systems for CAI
Validation and evaluation of CAI technology
Clinical studies of CAI showing first-in-man or early efficacy results
Clinical feasibility studies of computer-enhanced interventions
Applications
Skill analysis and workflow studies in CAI
Tracked and guided biopsies
CAI related scientific societies, conferences and journals
MICCAI
The Medical Image Computing and Computer Assisted Intervention Society (the MICCAI Society) is a professional association for medical image computing and computer-assisted medical interventions, including biomedical imaging and robotics.
ISCAS
The International Society for Computer Assisted Surgery (ISCAS) is a non-profit association of practitioners of computer-aided surgery and related medical interventions.
Its scope encompasses all fields within surgery, as well as biomedical imaging and instrumentation, and digital technology employed as an adjunct to imaging in diagnosis, therapeutics, and surgery.
SMIT
International conferences
MICCAI
MICCAI organizes an annual conference and associated workshops. Proceedings for this conference are published by Springer in the Lecture Notes in Computer Science series. General topics of the conference include medical image computing, computer-assisted intervention, guidance systems and robotics, visualization and virtual reality, computer-aided diagnosis, bioscience and biology applications, specific imaging systems, and new imaging applications.
IPCAI
International Conference on Information Processing in Computer-Assisted Interventions (IPCAI) is a premiere international forum for technical innovations, system development and clinical studies in computer-assisted interventions. IPCAI includes papers presenting novel technical concepts, clinical needs and applications as well as hardware, software and systems and their validation.
CARS
Computer Assisted Radiology and Surgery (CARS) holds an annual congress. Founded in 1985, CARS has focused on research and development of novel algorithms and systems and their applications in radiology and surgery. Its growth and impact are due to CARS's close collaboration with the ISCAS and EuroPACS societies, and with the CAR, CAD and CMI organizations.
See also
Information literacy
External links
International Society for Computer Assisted Surgery (ISCAS)
International Conference on Information Processing in Computer-Assisted Interventions (IPCAI)
The Computer Assisted Radiology and Surgery (CARS) congress
References
Health informatics |
15556177 | https://en.wikipedia.org/wiki/Danish%20UNIX%20User%20Group | Danish UNIX User Group | The Danish UNIX systems User Group (, DKUUG) is a computer user group around UNIX, which was the first Internet provider in Denmark and which created and maintained the .dk internet domain for Denmark. Founded 18 November 1983, DKUUG is a primary advisor on the Danish UNIX and Open Standards use. The group is active in the standards processes for UNIX, POSIX, the Internet, the World Wide Web, and Open Document Format.
History
The Danish UNIX User Group was founded on 18 November 1983 with the purpose of promoting UNIX and providing Internet access to the Danish academic community and the whole of Denmark. An offshoot of the EUUG, DKUUG originally had 41 members drawn from the Danish academic and business computing industry. Founder Keld Simonsen of the Datalogisk Institut at Copenhagen University served as group foreman from 1983 to 1997. It formed a commercial subsidiary, DKnet, organized as the Danish affiliate of the EUnet network.
In 1996, DKnet was purchased by the Danish PTT TeleDanmark in a private transaction for 20 million DKK.
During the 2000s, the organization was the subject of internal disagreement and infighting among board members.
See also
.dk
References
Further reading
(Danish) Keld Simonsen, "En historie om Keld og DKUUG" (A History of Keld and DKUUG), dkuug.dk,
(Danish) Keld Simonsen, "DKUUG 30 år - Opstart og resultater" (DKUUG - 30 years - Creation and results) (2013-11-18)
External links
Official DKUUG website
1983 establishments in Denmark
Organizations established in 1983
User groups
7151279 | https://en.wikipedia.org/wiki/Skyward | Skyward | Skyward is a software company specializing in K–12 school management and municipality management technologies, including student management, human resources, and financial management. Skyward is partnered with more than 1,900 school districts and municipalities worldwide.
Applications
Skyward applications are currently used by school districts and municipalities in 22 U.S. states and multiple international locations. Skyward's student information system and ERP solutions are designed to automate and simplify daily tasks in the areas of student management, financial management, and human resources.
Students' guardians use Skyward's Family Access product to stay up-to-date on students' grades, school schedules, food service accounts, and to communicate with teachers and other district staff. Students use Skyward's Student Access product to check their own grades and schedules, work on online assignments, and communicate with teachers.
History
1980–2000
Skyward was founded by Jim King in 1980 in Stevens Point, Wisconsin under the name Jim King and Associates. King worked as a subcontracted employee for a variety of businesses around Wisconsin, writing human resources and accounting software for IBM 5100 computers. In 1981, King wrote software for Merrill Area Public Schools, which was subsequently purchased by three other districts in Wisconsin.
In 1984, Jim King and Associates incorporated as a company and adopted the name School Administrative Software, Incorporated (SASI).
In 1988 and 1992, SASI opened offices in St. Cloud, Minnesota and Bloomington, Illinois, respectively. In 1994, SASI purchased Matrix Computers, a special education administration software company, and SASI changed their name to Skyward, Inc. In 1998 and 1999, Skyward opened offices in Lansing, Michigan and Indiana.
2000–2010
In 2001, Skyward partnered with the Washington School Information Processing Cooperative (WSIPC) to integrate into 297 districts throughout the state. In 2002, Skyward opened an office in Austin, Texas. In 2006, Skyward partnered with their first international customer, the American Embassy School in New Delhi, India.
In this time, Skyward expanded sales to Utah, Pennsylvania, New Jersey, New Mexico, Tennessee and Florida.
2010–2014
In 2011 the Texas Education Agency selected Skyward as a preferred vendor of student administrative software for Texas schools. In 2013, Rhode Island and Tennessee education departments both selected Skyward as a preferred vendor of student administrative software for their schools.
2014–present
In March 2016, Skyward moved all corporate operations to its new world headquarters building in Stevens Point, Wisconsin.
Awards
2013, 2015 EdTech Digest Cool Tool Award
2017, 2018 Bubbler Award
References
External links
Official site
Software companies based in Wisconsin
Companies based in Wisconsin
Software companies of the United States |
53970875 | https://en.wikipedia.org/wiki/AnyDesk | AnyDesk | AnyDesk is a closed source remote desktop application distributed by AnyDesk Software GmbH. The proprietary software program provides platform independent remote access to personal computers and other devices running the host application. The software is currently installed on over 500 million devices across multiple platforms. It offers remote control, file transfer, and VPN functionality.
Company
AnyDesk Software GmbH was founded in 2014 in Stuttgart, Germany and has gone worldwide, with subsidiaries in the US and China.
In May 2018, AnyDesk secured 6.5 million euros of funding in a Series A round led by EQT Ventures. Another round of investment in January 2020 brought AnyDesk's combined funding to over 20 million dollars.
Software
AnyDesk uses a proprietary video codec, "DeskRT", designed to deliver high-quality video and sound transmission while minimizing the amount of transmitted data.
With its three megabyte total program size, AnyDesk is noted as an especially lightweight application.
Features
Availability of features is dependent upon the license of the individual user. Some main features include:
Remote access for multiple platforms (Windows, Linux, macOS, iOS, Android, etc.)
File transfer and manager
Remote Print
VPN
Unattended access
Whiteboard
Auto-Discovery (automatic analysis of local network)
Chat-Function
REST-API
Custom-Clients
Session protocol
Two-Factor-Authentication
Individual host-server
Security
AnyDesk uses TLS-1.2 with authenticated encryption. Every connection between AnyDesk-Clients is secured with AES-256. When a direct network connection can be established, the session is endpoint encrypted and its data is not routed through AnyDesk servers. Additionally, whitelisting of incoming connections is possible.
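For illustration only, the following is a minimal sketch of how a client application can insist on TLS 1.2 or newer for an outbound connection, written in TypeScript against Node.js's built-in tls module. It is a generic example, not AnyDesk's proprietary protocol or API, and the host name and port are placeholder assumptions.

```typescript
import * as tls from "node:tls";

// Generic sketch: require TLS 1.2 or newer for an outbound connection.
// "relay.example.com" and port 443 are placeholders, not AnyDesk endpoints.
const socket = tls.connect(
  {
    host: "relay.example.com",
    port: 443,
    minVersion: "TLSv1.2",           // refuse anything older than TLS 1.2
    servername: "relay.example.com", // SNI and certificate host-name check
  },
  () => {
    // This callback runs once the TLS handshake has completed.
    console.log("protocol:", socket.getProtocol());   // e.g. "TLSv1.2" or "TLSv1.3"
    console.log("cipher:", socket.getCipher().name);  // negotiated cipher suite
    console.log("certificate verified:", socket.authorized);
    socket.end();
  }
);

socket.on("error", (err) => {
  // Handshake or certificate failures surface here.
  console.error("TLS connection failed:", err.message);
});
```

In Node, certificate verification failures are rejected by default (rejectUnauthorized defaults to true), which is the behaviour a remote-access client would normally want.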
Abuses
AnyDesk can be optionally installed on computers and smartphones with full administrative permissions, if the user chooses to do so. This provides the host user with full access to the guest computer over the Internet, and, like all remote desktop applications, is a severe security risk if connected to an untrusted host.
Mobile access fraud
In February 2019, Reserve Bank of India warned of an emerging digital banking fraud, explicitly mentioning AnyDesk as the attack channel. The general scam procedure is as follows: fraudsters get victims to download AnyDesk from the Google Play Store on their mobile phone, usually by mimicking the customer service of legitimate companies. Then, the scammers convince the victim to provide the nine-digit access code and to grant certain permissions. After permissions are obtained and if no other security measures are in place, the scammers usually transfer money using the Indian Unified Payment Interface. A similar scam took place in 2020 according to Kashmir Cyber police. The same method of theft is widely used internationally on either mobile phones or computers: a phone call convinces a person to allow connection to their device, typically from a caller claiming to be a service provider to "solve problems with the computer/phone", warning that Internet service will otherwise be disconnected, or from a caller claiming to be a financial institution because "there have been suspicious withdrawal attempts from your account".
Bundling with ransomware
In May 2018, the Japanese cybersecurity firm Trend Micro discovered that cybercriminals bundled a new ransomware variant with AnyDesk, possibly as an evasion tactic masking the true purpose of the ransomware while it performs its encryption routine.
Technical support scams
Scammers have been known to use AnyDesk and similar remote desktop software to obtain full access to the victims' computer by impersonating a technical support person. The victim is asked to download and install AnyDesk and provide the attackers with access. When access is obtained, the attackers can control the computer and move personal files and sensitive data.
In 2017, the UK-based ISP TalkTalk banned TeamViewer and similar software from all its networks after scammers cold-called victims and talked them into giving access to their computers. The software was removed from the blacklist after a scam warning was put in place.
See also
Comparison of remote desktop software
Virtual Network Computing
References
External links
Software companies of Germany
Remote desktop
Remote administration software
Windows remote administration software
MacOS remote administration software
Linux remote administration software
Portable software
Proprietary cross-platform software
Virtual Network Computing
Web conferencing |
20005809 | https://en.wikipedia.org/wiki/Assessment%20in%20computer-supported%20collaborative%20learning | Assessment in computer-supported collaborative learning | Assessment in computer-supported collaborative learning (CSCL) environments is a subject of interest to educators and researchers. The assessment tools utilized in computer-supported collaborative learning settings are used to measure groups' knowledge
learning processes, the quality of groups' products and individuals' collaborative learning skills.
Perspective
Traditional assessment is equated with individualized exams and evaluations. However, in online collaborative learning, assessment requires a broader perspective as it encompasses the collaborative interactions using asynchronous and synchronous communications between group members. Assessment has been found to have a significant effect on CSCL by motivating learners through accountability and constructive feedback. It supports students in growing familiar with the course content through discourse and effectively encourages the participation of students.
Four metaphors of CSCL
There are four metaphors of assessment of (computer-supported) collaborative learning:
a. the acquisition metaphor
b. the participation metaphor
c. the knowledge creation metaphor
d. the sociocultural-based group cognition metaphor
Under the acquisition metaphor, learning is understood as the accumulation of knowledge in the individual student's mind, and it is evaluated on the basis of individual gain.
The participation metaphor emphasizes that learning does not happen only in an individual or isolated setting but in an interactive socio-cultural environment in which students participate and collaborate with each other.
The knowledge creation metaphor focuses mostly on collaborative activities, although individual activities may also be stressed, in the sense that students collaborate and interact actively as individuals during the learning process.
The sociocultural-based group cognition metaphor refers to individuals' participation in the learning process as they share meanings, ideas, and opinions with the other members of the group.
Instructor's role in CSCL assessment
A paradigm shift occurs in the assessment of the products and processes in CSCL. In the traditional educational setting, final assessment is performed exclusively by the instructor.(p. 232) In CSCL, the instructor designs and facilitates instruction and provides technical guidance. The participants take an active role in setting the standard criteria for assessing individual and group learning.
Intelligent support for CSCL assessment
A teacher's assessment should specify its function (summative or formative), type (peer assessment, portfolios, learning journals), format (rating scales, rubrics, feedback), focus (cognitive, social, or motivational processes), and degree of student involvement (self, peer, or co-teacher assessment).
Technology use in CSCL assessment
Various technologies may provide information that may be used for assessment purposes. For example, email, computer conferencing systems, bulletin boards, and hypermedia can be used as media for communication between group members in CSCL classrooms.(p. 13) This technology can be used to keep a record of the students' interactions. This interaction record enables the instructor and students to assess a learner's participation in and collaboration with the group.(p. 664)
Process assessment vs. product assessment
In CSCL settings, the relative value of the collaborative process and the product must be appropriately balanced. The pedagogical principle in CSCL environments is the assumption that knowledge is constructed through social negotiation and discussion with others. This social interaction encourages critical thinking and understanding.(p. 309) When learning occurs through social interaction, knowledge building can also be observed through text analysis or discourse analysis. One way to assess knowledge construction in online collaborative settings is by collecting and analyzing the discursive events recorded and kept as history in computer conferencing systems in the form of virtual artifacts. Transcripts can be used to determine the quantity and quality of interactions in negotiating the meaning of the course material.(p. 379) The instructor evaluates the messages exchanged by the students for meaningfulness and pertinence with regard to the target content.
Assessment of the process
Instructors can use discourse analysis to assess the students' learning of the collaborative process itself. The instructor can make use of the dialogue to look for cues of collaboration: support and respect in their criticisms, consideration of other teammates' opinions, negotiation of meaning, demonstration of mutual understanding, achievement of consensus, problem-solving, and time and task management issues. Another consideration in assessing students' collaborative skills is the students' competence in online collaboration. As proficiency develops in progressive stages, the instructor can design the assessment to account for the students' developing competence in progressive steps throughout the online collaborative process.(p. 378)
Assessment of the product
Collaborative products can be used to assess learners' knowledge acquisition. The products can be: a concept map, a report, a research paper, an essay, a wiki, a website, etc. Two assessable elements of a collaborative product are the overall quality of the collaborative product and the contributions of each individual.(p. 386) Each member of the group must participate in the collaborative activities. The products created by groups of students in CSCL contexts cannot be used as the sole evidence of knowledge acquisition. Although a quality product may be important, it is the process that generates the actual learning.(p. 170)
Self and peer assessment
In self-assessment in CSCL, students take responsibility for their learning by evaluating and judging aspects of their own learning activity. In peer assessment, individuals take into account the quantity and quality of their peers' products or performance. In CSCL, these two types of assessment are dynamically interrelated. The aim of self and peer assessment in CSCL is to improve students' learning and develop individual learning skills, as well as to grade individual learning outcomes.
Self and peer assessment:
drives students' learning;
aids students in recognizing individual potential and sharing collaborative work effectively and efficiently;
enables instructors to perceive the effect of individual learning through discourse;
informs the instructor of the students' opinion change and skill improvement that occurs through adversity in the online collaborative process;
develops students as retrospective thinkers.
Group work assessment
Group work assessment in CSCL measures the quantity and quality of students' learning as a team. Group work or teamwork is a collaborative learning situation in which students share the task of developing a product presented at the end of the course. Group work is not measured and interpreted independently but evaluated with other assessment tools, and plays a role in assisting learners to reflect on their learning process.
Group work assessment:
diagnoses the collaborative learning process and shows the instructor what works and what does not;
identifies and corrects destructive conflict in collaborative learning;
facilitates learners' reflection making the collaborative task effective and efficient;
allows the instructor to monitor the collaborative learning processes and gather the information about individual performance and contributions to product quality.
E-portfolio assessment
E-portfolios can be used in the assessment of CSCL activities by showing students' growth or proficiency in learning. By organizing information about individual students, the instructor keeps track of each student's learning process. These portfolios are managed online and are referred to as electronic portfolios, digital portfolios, or web portfolios.
In using e-portfolios in CSCL assessment, an instructor determines:
purpose;
type;
choice of items to include;
guidelines or criteria;
impact on the teacher and students;
self-reflection component.
Collaborative Learning
Collaborative Learning (CL) is a widespread practice at all levels of education. Developments in technology have led to a related discipline known as Computer-Supported Collaborative Learning (CSCL). The use of computers during collaborative learning has shifted teachers' methodology and improved the understanding of how group work affects individual and group cognition.
Assessment in computer-supported collaborative learning (CSCL) environments is shaped by:
a. what is measured
b. its purpose
The assessment process distinguishes two important purposes:
1. formative
2. summative
Summative assessment (assessment of learning) is described as individualistic and decontextualized, carried out in isolation from the learning process. It is used at the end of the course, and its main purpose is to check the students' progression throughout the entire learning process, that is, how well the students performed. Summative assessment focuses mostly on the cognitive approaches used to educate the students. It is designed by the teacher, who uses only a single performance score.
On the other hand, formative assessment (assessment for learning) is contextualized and represents a picture of the learners' characteristics. It is an essential part of the learning process and is used by the teacher several times to evaluate the students' knowledge, not only at the end of the course. It comprises motivational, social, and cognitive aspects of the learning process. Also, this type of assessment does not use only a single score but creates a profile for each student.
Notes
References
Anderson, T., Rourke, L., Garrison, D. R., & Archer, W. (2001). Assessing teaching presence in a computer conferencing context. Journal for Asynchronous Learning Networks, 5(2), 1-17.
Butler, S. M. & McMunn, N. D. (2006). A teacher's guide to classroom assessment: Understanding and using assessment to improve student learning. CA: Jossy-Bass, Inc. Press.
De Hoyos, M. L. C. (2004). Assessment of teamwork in higher education collaborative learning teams : a validation study. Retrieved from ProQuest Digital Dissertations. ATT 3150570.
Falchikov, N. (1986). Product comparisons and process benefits of collaborative peer group and self-assessment. Assessment and Evaluation in Higher Education, 11(2), 146-166.
Hinze-Hoare, V. (2007). CSCR: Computer supported collaborative research. United Kingdom: University of Southampton. Retrieved October 14, 2008 from
Knight, P. (1995). Assessment for learning in Higher Education. London: Kogan Page.
Lee, H. (2006). Students' perception on peer/ self-assessment in an online collaborative learning environment. Paper presented at the meeting of World Conference on Educational Multimedia, Hypermedia and Telecommunications (EDMEDIA) 2006, Chesapeake, VA.
McConnell, D. (2002). Collaborative assessment as a learning process in e-learning. The proceedings of Computer Support for Collaborative Learning: Foundations for a CSCL Community, 7(11), 566-567.
Mcdonald, J. (2003). Assessing online collaborative learning: Process and product. Computers & Education, 40(4), 377-391.
Pozzi, F., Manca, S., Persico, D., & Sarti, L. (2007). A general framework for tracking and analyzing learning processes in computer-supported collaborative learning environments. Innovations in Education and Teaching International, 44(2), 169-179.
Suther, D. (2006). Technology: Affordances for inter subjective learning: A thematic agenda for CSCL. International Journal of Computer-Supported Collaborative Learning, 1(3), 662-671.
Swan, K., Shen, J., & Hiltz, S. (2006). Assessment and collaboration in online learning. Journal of Asynchronous Learning Networks, 10(1), 44-61.
Student assessment and evaluation
Computer-based testing |
47507079 | https://en.wikipedia.org/wiki/Islamic%20State%20Hacking%20Division | Islamic State Hacking Division | The Islamic State Hacking Division (ISHD) or The United Cyber Caliphate (UCC) is a merger of several hacker groups self-identifying as the digital army for the Islamic State of Iraq and Levant (ISIS/ISIL). The unified organization comprises at least four distinct groups, including the Ghost Caliphate Section, Sons Caliphate Army (SCA), Caliphate Cyber Army (CCA), and the Kalashnikov E-Security Team. Other groups potentially involved with the United Cyber Caliphate are the Pro-ISIS Media group Rabitat Al-Ansar (League of Supporters) and the Islamic Cyber Army (ICA). Evidence does not support the direct involvement of the Islamic State leadership. It suggests external and independent coordination of Pro-ISIS cyber campaigns under the United Cyber Caliphate (UCC )name. Investigations also display alleged links to Russian Intelligence group, APT28, using the name as a guise to wage war against western nations.
Concerns
The group's actions have included online recruiting, website defacement, social media hacks, denial-of-service attacks, and doxing with 'kill lists.' The group is classified as low-threat and inexperienced because its attacks require a low level of sophistication and rely on publicly available hacking tools.
Experts raised doubts about the source and nature of data from released 'kill lists' containing personal information about U.S. military personnel claimed to have been stolen from hacked U.S. government servers. There is no evidence that the United Cyber Caliphate (UCC) compromised U.S. systems. The data included public, unclassified, and often outdated information about civilians, non-U.S. citizens, and others, built from old data breaches or web-scraped data.
U.S., French, and German intelligence investigated attacks following the hack of the French television channel TV5Monde and the U.S. CENTCOM Twitter attack. All three countries linked actions by the United Cyber Caliphate (UCC) to APT28, a Russian intelligence group.
History
The group first emerged in hacking operations against U.S. websites in January 2015 as the Cyber Caliphate Army (CCA). In March 2015, the Islamic State published a "kill list" on a website that included names, ranks, and addresses of 100 U.S. military members.
A pattern of similar attacks emerged after the media coverage. At least 19 individual 'kill lists,' including personal information of American, Canadian, and European citizens, were released between March 2015 and June 2016. On April 4, 2016, all four groups united as the United Cyber Caliphate (UCC).
In June 2016, the Middle East Media Research Institute found and revealed to the media an alleged list of approximately 8,300 people around the world as potential lone-wolf attack targets.
Successful attacks since mid-2014
Australian airport website defaced.
French TV5Monde live feed hacked, social media hacked and defaced with the message "Je Suis ISIS". French investigators later discounted this, instead suspecting the involvement of a hacking group, APT28, allegedly linked to the Russian government.
ISIS hacks Swedish radio station and broadcasts recruitment song
United States' military database hacked in early August and data pertaining to approximately 1400 personnel posted online.
Top secret British government emails hacked. The emails pertained to top cabinet ministers. The intrusion was detected by GCHQ.
On February 28, 2016, the Caliphate Cyber Army (CCA) carried out a hack on the website of Solar UK, a company in the historic town of Battle, England. Customers were diverted to a web page featuring the ISIS logo accompanied by a string of threats. “Fear us,” the page warned. “We are the Islamic Cyber Army.”
On Friday, April 15, 2016, Islamic State hackers operating under the name UCC hacked 20 Australian websites in a coordinated attack on Australian businesses. Some of the websites were redirected to a page containing the group's content.
In early April 2017, UCC released a kill list of 8,786 people.
In mid-2019, an Islamic State-affiliated hacking group hijacked 150 targeted Twitter accounts using an unknown vulnerability.
References
Hacker groups
Islamic State of Iraq and the Levant |
457692 | https://en.wikipedia.org/wiki/Martin%20Galway | Martin Galway | Martin Galway (born 3 January 1966, Belfast, Northern Ireland) is one of the best known composers of chiptune video game music for the Commodore 64 sound chip, the SID soundchip, and for the Sinclair ZX Spectrum. His works include Rambo: First Blood Part II, Comic Bakery and Wizballs scores, as well as the music used in the loader for the C64 version of Arkanoid.
Career
Galway was the first musician to get published with sampled sounds on the Commodore, with the theme for the Arkanoid conversion. When asked about how he did it, he answered:
I figured out how samples were played by hacking into someone else's code... OK, I admit it... It was a drum synthesizer package called Digidrums, actually, so you could still say I was the first to include samples in a piece of music. ... Never would I claim to have invented that technique, I just got it published first. In fact, I couldn't really figure out where they got the sample data, just that they were wiggling the volume register, so I tried to make up my own drum sample sounds in realtime – which is the flatulence stuff that shipped in Arkanoid. ... After the project was in the shops I gained access to some real drum samples, and I slid those into my own custom version of the tune. The one that's in the shops is kind of a collage of farts & burps, don't you think?... Later I was able to acquire some proper drum samples and by Game Over it got quite sophisticated.
Galway was appointed as Audio Director at Origin Systems in 1990. He worked at Digital Anvil from 1996.
Galway's most recent post was working as Audio Director for Cloud Imperium Games on their upcoming PC game Star Citizen, created by Chris Roberts of Wing Commander. Star Citizen was expected to release Q1 2015. Galway has since left this post.
Video game music
Atomic Protector (Optima Software, 1983)
Cookie (Ultimate Play the Game, 1983. An unreleased BBC Micro conversion, unearthed in 2002)
Daley Thompson's Decathlon (Ocean Software, 1984, includes a chiptune cover of Yellow Magic Orchestra's "Rydeen")
Swag (Micromania, 1984)
Yie Ar Kung-Fu (Includes a remix of Jean-Michel Jarre's "Les Chants Magnétiques part IV", Imagine, 1985)
Hyper Sports (Imagine, 1985)
Kong Strikes Back! (The first C64 song ever [composed in 1984] to use arpeggio which soon became an essential part of C64 sound, Ocean, 1985)
The Neverending Story (Ocean, 1985)
Ocean Loader 1 & 2 (The two different songs were used in several games released by Ocean, playing during the loading sequence of the game. Ocean Loaders 3 to 5 were composed by Peter Clarke and Jonathan Dunn) (Ocean, 1985)
Roland's Ratrace (Ocean, 1985)
Mikie (Imagine, 1986, like the arcade game, this includes the arrangements of The Beatles songs "Twist and Shout", "A Hard Day's Night")
Ping Pong (Imagine, 1986, ZX Spectrum and C64 conversions)
Comic Bakery (Imagine, 1986)
Stryker's Run (Superior Software, 1986, includes a chiptune cover of Yellow Magic Orchestra's "Rydeen")
Terra Cresta (Imagine, 1986)
Green Beret (Imagine, 1986)
Helikopter Jagd (Ocean, 1986)
Highlander (Ocean, 1986)
Hunchback II (Ocean, 1986)
Match Day (Ocean, 1986)
Miami Vice (Ocean, 1986)
Parallax (Ocean, 1986)
Rambo: First Blood Part II (Ocean, 1986)
Short Circuit (Ocean, 1986, contains the cover of "Who's Johnny" by El DeBarge)
Arkanoid (Imagine, 1987)
Athena (Imagine, 1987)
Game Over (Imagine, 1987)
Rastan (Imagine, 1987)
Slap Fight (Imagine, 1987)
Yie Ar Kung-Fu II (Imagine, 1987)
Combat School (Ocean, 1987)
Crazee Rider (Superior Software, 1987)
Wizball (Ocean, 1987)
MicroProse Soccer (MicroProse, 1988)
Times of Lore (Origin, 1988)
Insects in Space (Sensible Software, 1989)
Wing Commander 2: Vengeance of the Kilrathi (Origin, 1991)
Ultima VII: The Black Gate (Origin, 1992)
Ultima Underworld: The Stygian Abyss (Origin, 1992)
Strike Commander (Electronic Arts/Origin, 1993)
Wing Commander 4: The Price of Freedom (Electronic Arts/Origin, 1995)
The Kilrathi Saga (Electronic Arts, 1996)
Starlancer (Digital Anvil/Microsoft, 2000)
Notes
References
High Voltage SID Collection
Martin Galway STIL
External links
Artist profile at OverClocked ReMix
Martin Galway's Music at CVGM
Information about Martin Galway on Certain Affinity web page
Legends of the C64 article on Martin Galway
Remix64 Interview Sánchez, Claudio (10 July 2003)
Remix64 Interview Carr, Neil (28 March 2001)
1966 births
Living people
Chiptune musicians
Commodore 64 music
Composers from Northern Ireland
Musicians from Belfast
Origin Systems people
Sensible Software
Video game composers
People educated at Parrs Wood High School |
1956394 | https://en.wikipedia.org/wiki/Internet%20Explorer%205 | Internet Explorer 5 | Microsoft Internet Explorer 5 (IE5) is a graphical web browser, the fifth version of Internet Explorer, the successor to Internet Explorer 4 and one of the main participants of the first browser war. Its distribution methods and Windows integration were involved in the United States v. Microsoft Corp. case. Launched on March 18, 1999, it was the default browser in Windows 98 SE, Windows 2000 and Windows Me (later default was Internet Explorer 6) and can replace previous versions of Internet Explorer on Windows 3.1x, Windows NT 3.x, Windows 95, Windows NT 4.0 and Windows 98 First Edition. Although Internet Explorer 5 ran only on Windows, its siblings Internet Explorer for Mac 5 and Internet Explorer for UNIX 5 supported Mac OS X, Solaris and HP-UX.
IE5 presided over a large market share increase over Netscape Navigator between 1999 and 2001, and offered many advanced features for its day. In addition, it was compatible with the largest range of OSes of all the IE versions. However, support for many OSes quickly dropped off with later patches, and Windows XP and later Windows versions are not supported, because of inclusion of later IE versions. The 1999 review in PC World noted, "Credit the never-ending game of browser one-upsmanship that Netscape and Microsoft play. The new IE 5 trumps Netscape Communicator with smarter searching and accelerated browsing."
IE5 attained over 50% market share by early 2000, taking the lead over other browser versions including IE4 and Netscape. 5.x versions attained over 80% market share by the release of IE6 in August 2001. Versions 5.0x and 5.5 were then surpassed by Internet Explorer 6.0, making the 5.x line the second most popular browser family, with its market share falling to 34 percent by mid-2003. In addition, Firefox 1.0 had overtaken it in market share by early 2005. Market share of IE5 fell below 1% by the end of 2006, around the time Internet Explorer 7 was released.
Microsoft spent over US$100 million a year in the late 1990s, with over 1000 people working on IE by 1999 during the development of IE5.
The rendering behavior of Internet Explorer 5.x lives on in other browsers' quirks modes. Internet Explorer 5 is no longer available for download from Microsoft. However, archived versions of the software can be found on various websites.
Internet Explorer 5 is the final version of Internet Explorer that supports Windows 3.1x, Windows NT 3.x, Windows 95 and all Windows NT 4.0 versions newer than SP2, except SP6a. The next version, Internet Explorer 6, only supports Windows NT 4.0 SP6a or later.
History
The release of Internet Explorer 5 happened in three stages. First, a Developer Preview was released in June 1998 (5.0B1), then a Public Preview followed in November 1998 (5.0B2), and the final version (5.0) was released in March 1999. In September it shipped with Windows 98 Second Edition. Version 5.01, a bug-fix release, followed in December 1999 and is included in Windows 2000. Version 5.0 was the last one to be released for Windows 3.1x or Windows NT 3.x. Version 5.5 for Windows was released in June 2000, bundled with Windows ME, and added support for 128-bit encryption; it dropped support for several older Windows versions. Internet Explorer 5 Macintosh Edition had been released a few months earlier, on March 27, 2000, and was the last version of Internet Explorer to be released on a non-Windows platform.
A 1999 review of IE5 by Paul Thurrott described IE5 in ways such as, "Think of IE 5.0 as IE 4.0 done right: All of the rough areas have been smoothed out..", "....comes optionally bundled with a full suite of Internet applications that many people are going to find irresistible.", "IE 5.0 is a world-class suite of Internet applications."
Microsoft ended all support for Internet Explorer 5.5, including security updates, on December 31, 2005. Microsoft continued to support Internet Explorer 5.01 on Windows 2000 SP4, according to its Support Lifecycle Policy; however, as with Windows 2000, this support was ended on July 13, 2010.
Overview
Version 5.0, launched on March 18, 1999, and subsequently included with Windows 98 Second Edition and bundled with Microsoft Office 2000, was a significant release that supported bi-directional text, ruby characters, XML, XSLT and the ability to save web pages in MHTML format. There was enhanced support for CSS Level 1 and 2, and a side bar for web searches was introduced, allowing quick jumps throughout results.
The first release of Windows 98 in 1998 had included IE4. Internet Explorer 5 incorrectly includes the padding and borders within a specified width or height, which results in a narrower or shorter rendering of a box; this box model bug was fixed in Internet Explorer 6 when running in standards-compliant mode.
With the release of Internet Explorer 5.0, Microsoft released the first version of XMLHttpRequest (XHR), giving birth to Ajax (even though the term "Ajax" was not coined until years later). XMLHttpRequest is an API that can be used by JavaScript and other Web browser scripting languages to transfer XML and other text data between a page's client side and server side; it has been available since the introduction of Internet Explorer 5.0 and is accessible via JScript, VBScript and other scripting languages supported by IE browsers. Windows Script Host was also installed with IE5, although later on viruses and malware would attempt to use this ability as an exploit, which resulted in pressure to disable it for security reasons. The Smart Offline Favorites feature was added to the Active Desktop component introduced in IE4.
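For illustration, the sketch below shows the request pattern the passage describes, written in TypeScript against the standards-based XMLHttpRequest interface that later browsers exposed as a global constructor; in IE5 itself the object was typically obtained through ActiveX (for example via new ActiveXObject("Microsoft.XMLHTTP")) rather than this constructor, and the URL here is a placeholder.

```typescript
// Minimal sketch of the asynchronous XHR pattern: fetch text from the server
// and hand it to a callback without reloading the page. The URL is a placeholder.
function fetchText(url: string, onDone: (body: string) => void): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url, true); // third argument true = asynchronous request
  xhr.onreadystatechange = () => {
    // readyState 4 (DONE) means the whole response has arrived.
    if (xhr.readyState === XMLHttpRequest.DONE && xhr.status === 200) {
      onDone(xhr.responseText);
    }
  };
  xhr.send();
}

fetchText("/api/data.xml", (body) => {
  console.log("received", body.length, "characters");
});
```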
An "HTML Application" (HTA) is a Microsoft Windows application written with HTML and Dynamic HTML and introduced with IE5. Internet Explorer 5.0 also introduced favicon support and Windows Script Host, which provides scripting capabilities comparable to batch files, but with a greater range of supported features.
Version 5.5 followed in June 2000. First released to developers at the 2000 Professional Developers Conference in Orlando, Florida, then made available for download, version 5.5 focused on improved print preview capabilities, CSS and HTML standards support, and developer APIs; this version was bundled with Windows ME. Version 5.5 also includes support for 128-bit encryption. Although it is no longer available for download from Microsoft directly, it can also be installed with MSN Explorer 6.0 (msnsetup_full.exe). The full version of MSN Explorer can be downloaded only on Windows 95, Windows NT 4.0, Windows 98, Windows 98 SE, or Windows 2000 systems on which Internet Explorer 5.5 has not yet been installed. The full version also works on Windows ME and Windows XP, but the setup file must be downloaded on Windows 2000 or earlier and transferred to the newer operating system, or obtained on a newer operating system using an outdated web browser such as Netscape 4.8.
Although newer browsers have been released, IE5 rendering mode continues to have an impact, as a 2008 Ars Technica article notes:
IE5.5 (and below) was decidedly nonstandard in its rendering behavior. Hundreds of millions of web pages were written to look "right" in IE5.5's broken rendering. The result was something of a quandary for Microsoft when it came to release IE6. They wanted to improve the standards conformance in IE6, but could not afford to break pages dependent on the older behavior.
The solution was the "doctype switch". The doctype switch allowed IE6 to support both the old IE5.5 behavior—"quirks mode"—and new, more standards-conforming behavior—"standards mode."
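Whether a given page ended up in quirks mode or standards mode can be observed from script through document.compatMode, which reports "BackCompat" for quirks mode and "CSS1Compat" for standards mode. A small TypeScript check, for illustration:

```typescript
// Report whether the current document is rendered in quirks mode
// (the old IE5.5-compatible behaviour) or in standards mode.
function renderingMode(doc: Document): "quirks" | "standards" {
  return doc.compatMode === "BackCompat" ? "quirks" : "standards";
}

console.log(`This page is rendered in ${renderingMode(document)} mode.`);
```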
United States v. Microsoft Corp.
On April 3, 2000, Judge Jackson issued his findings of fact that Microsoft had abused its monopoly position by attempting to "dissuade Netscape from developing Navigator as a platform", that it "withheld crucial technical information", and attempted to reduce Navigator's usage share by "giving Internet Explorer away and rewarding firms that helped build its usage share" and "excluding Navigator from important distribution channels".
Jackson also released a remedy that suggested Microsoft should be broken up into two companies. This remedy was overturned on appeal, amidst charges that Jackson had revealed a bias against Microsoft in communication with reporters. The findings of fact that Microsoft had broken the law, however, were upheld. The Department of Justice announced on September 6, 2001 that it was no longer seeking to break up Microsoft and would instead seek a lesser antitrust penalty. Several months later the Department of Justice reached a settlement agreement with Microsoft.
Major features
IE5 introduced many new or improved features:
Web Page, Complete
Web Archive (MHTML) (only with Microsoft Outlook Express 5)
Language Encoding (new options such as Install On Demand)
History Explorer Bar (new search and sort options)
Search Explorer Bar (new options for searching)
Favorites (make available offline)
AutoComplete Feature
Windows Radio Bar Toolbar
Ability to set a default HTML Editor
Internet Explorer Repair Tool
FTP Folders allows browsing of FTP and Web-Based Folders from Windows Explorer. (see Shell extension)
Approved Sites (PICS not required for listed sites option)
Hotmail Integration
There was also a Microsoft Internet Explorer 5 Resource Kit
Compatibility Option allowed Internet Explorer 4 to be run side by side with IE 5, although IE 5.5 would be the last version with this feature.
XMLHTTPRequest support via ActiveX, making IE 5 the earliest AJAX-capable browser
Bundled software
IE5 for Windows came with Windows Media Player 6.0 (with new Real Audio codecs), NetMeeting 2.11, Chat 2.5 and FrontPage Express 2.0. Other optional installs included Offline Browsing Pack, Internet Explorer Core Web Fonts, and Visual Basic Scripting (VBScript) support. Internet Explorer versions 5.0 and 5.5 are no longer available from Microsoft.
System and hardware requirements
Adoption capability overview
IE 5.01 SP2 is the last version to support Windows 3.1x and Windows NT 3.x. Support for 3.1x and NT 3.x was dropped after that, as well as support for HP-UX, Solaris, the classic Mac OS, and Mac OS X. Windows 2000 was the last to support IE 5.0 (with which it was released) well after support in other Windows systems was deprecated. IE 5.5 SP2 is the last version to support Windows 95 and Windows NT 4.0 versions below SP6a, but above SP2. In addition, users of Windows NT 4.0 SP6a, Windows 98, Windows 2000 and Windows ME could upgrade to IE 6.0 SP1. IE5 was not developed for 68k Macs, support for which had been dropped in Internet Explorer 4.5.
Windows software
Windows 32-bit versions, including Windows 95, Windows 98, Windows NT 3.51, Windows NT 4.0, and Windows 2000
Windows 16-bit versions, including Windows 3.1 and Windows for Workgroups 3.11
Note: Although Windows NT version 3.51 is a 32-bit platform, it must run the 16-bit version of Internet Explorer.
UNIX, including Sun Solaris 2.5.1, Sun Solaris 2.6, and Hewlett Packard HP-UX
PC hardware
Internet Explorer 5.0 for 32-bit Windows Operating Systems
Minimum Requirements: 486DX/66 MHz or higher, Windows 95/98, 12 MB RAM, 56 MB disk space.
Download Size: 37 MB
There was also a 380 KB active installer that only downloaded selected components
Internet Explorer 5.0 for 16-bit Windows Operating Systems
Minimum Requirements: 486DX or higher, Windows 3.1 or NT 3.5, 12 MB RAM for browser only installation (16 MB RAM if using the Java VM). 30 MB disk space to run setup.
Download Size: 9.4 MB
Apple Macintosh
Internet Explorer 5 for Apple Macintosh requirements:
PowerPC processor
Mac OS version 7.6.1 or later
8 MB RAM plus Virtual Memory
12 MB hard disk space
QuickTime 3.0 or later
Open Transport 1.2 or later
Versions
Early versions of Mac OS X shipped with Internet Explorer for Mac v5.1 as the default web browser, until Safari became the default browser in Mac OS X 10.3 Panther.
See also
Browser timeline
Comparison of web browsers
History of the Internet
References
External links
Internet Explorer Architecture
Internet Explorer Community—The official Microsoft Internet Explorer Community
Internet Explorer History
1999 software
Gopher clients
Internet Explorer
Discontinued internet suites
Macintosh web browsers
MacOS web browsers
POSIX web browsers
Windows 98
Windows components
Windows ME
Windows web browsers
Windows 2000 |
2563492 | https://en.wikipedia.org/wiki/Reliability%20%28computer%20networking%29 | Reliability (computer networking) | In computer networking, a reliable protocol is a communication protocol that notifies the sender whether or not the delivery of data to intended recipients was successful. Reliability is a synonym for assurance, which is the term used by the ITU and ATM Forum.
Reliable protocols typically incur more overhead than unreliable protocols, and as a result, function more slowly and with less scalability. This often is not an issue for unicast protocols, but it may become a problem for reliable multicast protocols.
Transmission Control Protocol (TCP), the main protocol used on the Internet, is a reliable unicast protocol. UDP is an unreliable protocol and is often used in computer games, streaming media or in other situations where speed is an issue and some data loss may be tolerated because of the transitory nature of the data.
Often, a reliable unicast protocol is also connection oriented. For example, TCP is connection oriented, with the virtual-circuit ID consisting of source and destination IP addresses and port numbers. However, some unreliable protocols are connection oriented, such as Asynchronous Transfer Mode and Frame Relay. In addition, some connectionless protocols, such as IEEE 802.11, are reliable.
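The difference is visible directly in socket APIs. In the TypeScript sketch below, written against Node.js's built-in net and dgram modules (the host name and port are placeholder assumptions), a failed TCP connection is reported back to the sender through an error event, whereas a UDP send completes with no indication of whether the datagram ever reached the recipient.

```typescript
import * as net from "node:net";
import * as dgram from "node:dgram";

// TCP (reliable service): failures such as a refused or reset connection
// are reported back to the sender via the "error" event.
const tcp = net.createConnection({ host: "example.com", port: 7 }, () => {
  tcp.write("hello over TCP\n");
  tcp.end();
});
tcp.on("error", (err) => {
  console.error("TCP told the sender that delivery failed:", err.message);
});

// UDP (unreliable service): send() completes once the datagram is handed to
// the network; a datagram lost in transit produces no notification at all.
const udp = dgram.createSocket("udp4");
udp.send(Buffer.from("hello over UDP"), 7, "example.com", (localErr) => {
  // localErr only covers local problems (e.g. a failed DNS lookup),
  // never loss of the datagram somewhere along the path.
  if (localErr) console.error("local send error:", localErr.message);
  udp.close();
});
```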
History
Building on the packet switching concepts proposed by Donald Davies, the first communication protocol on the ARPANET was a reliable packet delivery procedure to connect its hosts via the 1822 interface. A host computer simply arranged the data in the correct packet format, inserted the address of the destination host computer, and sent the message across the interface to its connected Interface Message Processor (IMP). Once the message was delivered to the destination host, an acknowledgment was delivered to the sending host. If the network could not deliver the message, the IMP would send an error message back to the sending host.
Meanwhile, the developers of CYCLADES and of ALOHAnet demonstrated that it was possible to build an effective computer network without providing reliable packet transmission. This lesson was later embraced by the designers of Ethernet.
If a network does not guarantee packet delivery, then it becomes the host's responsibility to provide reliability by detecting and retransmitting lost packets. Subsequent experience on the ARPANET indicated that the network itself could not reliably detect all packet delivery failures, and this pushed responsibility for error detection onto the sending host in any case. This led to the development of the end-to-end principle, which is one of the Internet's fundamental design principles.
Reliability properties
A reliable service is one that notifies the user if delivery fails, while an unreliable one does not notify the user if delivery fails. For example, Internet Protocol (IP) provides an unreliable service. Together, Transmission Control Protocol (TCP) and IP provide a reliable service, whereas User Datagram Protocol (UDP) and IP provide an unreliable one.
In the context of distributed protocols, reliability properties specify the guarantees that the protocol provides with respect to the delivery of messages to the intended recipient(s).
An example of a reliability property for a unicast protocol is "at least once", i.e. at least one copy of the message is guaranteed to be delivered to the recipient.
Reliability properties for multicast protocols can be expressed on a per-recipient basis (simple reliability properties), or they may relate the fact of delivery or the order of delivery among the different recipients (strong reliability properties). In the context of multicast protocols, strong reliability properties express the guarantees that the protocol provides with respect to the delivery of messages to different recipients.
An example of a strong reliability property is last copy recall, meaning that as long as at least a single copy of a message remains available at any of the recipients, every other recipient that does not fail eventually also receives a copy. Strong reliability properties such as this one typically require that messages are retransmitted or forwarded among the recipients.
An example of a reliability property stronger than last copy recall is atomicity. The property states that if at least a single copy of a message has been delivered to a recipient, all other recipients will eventually receive a copy of the message. In other words, each message is always delivered to either all or none of the recipients.
One of the most complex strong reliability properties is virtual synchrony.
Reliable messaging is the concept of message passing across an unreliable infrastructure whilst being able to make certain guarantees about the successful transmission of the messages. For example, that if the message is delivered, it is delivered at most once, or that all messages successfully delivered arrive in a particular order.
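As a complementary sketch, the receiving side of such a scheme can turn duplicated or reordered arrivals into at-most-once, in-order delivery by tagging every message with a sequence number and discarding anything it has already seen. The message format here is an assumption made for illustration, not taken from any standard.

class InOrderReceiver:
    # Deliver each message at most once and in sequence-number order.

    def __init__(self):
        self.next_expected = 0   # sequence number to deliver next
        self.pending = {}        # out-of-order messages buffered by sequence number

    def on_message(self, seq, payload):
        # Accept a possibly duplicated, possibly reordered message and return
        # the payloads that can now be handed to the application, in order.
        if seq < self.next_expected or seq in self.pending:
            return []            # duplicate: already delivered or already buffered
        self.pending[seq] = payload
        delivered = []
        while self.next_expected in self.pending:
            delivered.append(self.pending.pop(self.next_expected))
            self.next_expected += 1
        return delivered

rx = InOrderReceiver()
print(rx.on_message(1, b"second"))   # [] -- buffered until message 0 arrives
print(rx.on_message(0, b"first"))    # [b'first', b'second']
print(rx.on_message(0, b"first"))    # [] -- duplicate is discarded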
Reliable delivery can be contrasted with best-effort delivery, where there is no guarantee that messages will be delivered quickly, in order, or at all.
Implementations
A reliable delivery protocol can be built on an unreliable protocol. An extremely common example is the layering of Transmission Control Protocol on the Internet Protocol, a combination known as TCP/IP.
Strong reliability properties are offered by group communication systems (GCSs) such as the Isis Toolkit, the Appia framework, Spread, JGroups or QuickSilver Scalable Multicast. The QuickSilver Properties Framework is a flexible platform that allows strong reliability properties to be expressed in a purely declarative manner, using a simple rule-based language, and automatically translated into a hierarchical protocol.
One protocol that implements reliable messaging is WS-ReliableMessaging, which handles reliable delivery of SOAP messages.
The ATM Service-Specific Coordination Function provides for transparent assured delivery with AAL5.
IEEE 802.11 attempts to provide reliable service for all traffic. The sending station will resend a frame if the sending station doesn't receive an ACK frame within a predetermined period of time.
Real-time systems
There is, however, a problem with the definition of reliability as "delivery or notification of failure" in real-time computing. In such systems, failure to deliver the real-time data will adversely affect the performance of the systems, and some systems, e.g. safety-critical, safety-involved, and some secure mission-critical systems, must be proved to perform at some specified minimum level. This, in turn, requires that a specified minimum reliability for the delivery of the critical data be met. Therefore, in these cases, it is only the delivery that matters; notification of the failure to deliver does not ameliorate the failure. In hard real-time systems, all data must be delivered by the deadline or it is considered a system failure. In firm real-time systems, late data is still valueless but the system can tolerate some amount of late or missing data.
There are a number of protocols that are capable of addressing real-time requirements for reliable delivery and timeliness:
MIL-STD-1553B and STANAG 3910 are well-known examples of such timely and reliable protocols for avionic data buses. MIL-1553 uses a 1 Mbit/s shared media for the transmission of data and the control of these transmissions, and is widely used in federated military avionics systems. It uses a bus controller (BC) to command the connected remote terminals (RTs) to receive or transmit this data. The BC can, therefore, ensure that there will be no congestion, and transfers are always timely. The MIL-1553 protocol also allows for automatic retries that can still ensure timely delivery and increase the reliability above that of the physical layer. STANAG 3910, also known as EFABus in its use on the Eurofighter Typhoon, is, in effect, a version of MIL-1553 augmented with a 20 Mbit/s shared media bus for data transfers, retaining the 1 Mbit/s shared media bus for control purposes.
The Asynchronous Transfer Mode (ATM), the Avionics Full-Duplex Switched Ethernet (AFDX), and Time Triggered Ethernet (TTEthernet) are examples of packet-switched networks protocols where the timeliness and reliability of data transfers can be assured by the network. AFDX and TTEthernet are also based on IEEE 802.3 Ethernet, though not entirely compatible with it.
ATM uses connection-oriented virtual channels (VCs) which have fully deterministic paths through the network, and usage and network parameter control (UPC/NPC), which are implemented within the network, to limit the traffic on each VC separately. This allows the usage of the shared resources (switch buffers) in the network to be calculated from the parameters of the traffic to be carried in advance, i.e. at system design time. That they are implemented by the network means that these calculations remain valid even when other users of the network behave in unexpected ways, i.e. transmit more data than they are expected to. The calculated usages can then be compared with the capacities of these resources to show that, given the constraints on the routes and the bandwidths of these connections, the resource used for these transfers will never be over-subscribed. These transfers will therefore never be affected by congestion and there will be no losses due to this effect. Then, from the predicted maximum usages of the switch buffers, the maximum delay through the network can also be predicted. However, for the reliability and timeliness to be proved, and for the proofs to be tolerant of faults in and malicious actions by the equipment connected to the network, the calculations of these resource usages cannot be based on any parameters that are not actively enforced by the network, i.e. they cannot be based on what the sources of the traffic are expected to do or on statistical analyses of the traffic characteristics (see network calculus).
AFDX uses frequency domain bandwidth allocation and traffic policing, that allows the traffic on each virtual link (VL) to be limited so that the requirements for shared resources can be predicted and congestion prevented so it can be proved not to affect the critical data. However, the techniques for predicting the resource requirements and proving that congestion is prevented are not part of the AFDX standard.
TTEthernet provides the lowest possible latency in transferring data across the network by using time-domain control methods – each time triggered transfer is scheduled at a specific time so that contention for shared resources is controlled and thus the possibility of congestion is eliminated. The switches in the network enforce this timing to provide tolerance of faults in, and malicious actions on the part of, the other connected equipment. However, "synchronized local clocks are the fundamental prerequisite for time-triggered communication". This is because the sources of critical data will have to have the same view of time as the switch, in order that they can transmit at the correct time and the switch will see this as correct. This also requires that the sequence with which a critical transfer is scheduled has to be predictable to both source and switch. This, in turn, will limit the transmission schedule to a highly deterministic one, e.g. the cyclic executive.
However, low latency in transferring data over the bus or network does not necessarily translate into low transport delays between the application processes that source and sink this data. This is especially true where the transfers over the bus or network are cyclically scheduled (as is commonly the case with MIL-STD-1553B and STANAG 3910, and necessarily so with AFDX and TTEthernet) but the application processes are not synchronized with this schedule.
With both AFDX and TTEthernet, there are additional functions required of the interfaces, e.g. AFDX's Bandwidth Allocation Gap control, and TTEthernet's requirement for very close synchronization of the sources of time-triggered data, that make it difficult to use standard Ethernet interfaces. Other methods for control of the traffic in the network that would allow the use of such standard IEEE 802.3 network interfaces is a subject of current research.
References
Network protocols
Reliability engineering |
31230857 | https://en.wikipedia.org/wiki/Amiga%20music%20software | Amiga music software | This article deals with music software created for the Amiga line of computers and covers the AmigaOS operating system and its derivates AROS and MorphOS and is a split of main article Amiga software.
See also related articles Amiga productivity software, Amiga programming languages, Amiga Internet and communications software and Amiga support and maintenance software for other information regarding software that run on Amiga.
Noteworthy Amiga music software
Samplitude by SEK'D (Studio fuer Elektronische Klangerzeugung Dresden), Instant Music, DMCS (DeLuxe Music) 1 and 2, Music-X, TigerCub, Dr. T's KCS, Dr. T's Midi Recording Studio, Bars and Pipes (from Blue Ribbon Soundworks, a firm that was bought by Microsoft and is now part of that group; the internal structure of Bars and Pipes later inspired the audio streaming data-passing design of the DirectX libraries), AEGIS Audio Master, Pro Sound Designer, AEGIS Sonix, Audio Sculpture, Audition 4 from SunRize Industries, SuperJAM!, HD-Rec, Audio Evolution, RockBEAT drum machine and various MIDI sequencing programs by Gajits Music Software.
Audio Digitizers Software
Together with the well-known Dr. T's Midi Recording Studio, Pro Sound Designer, Sonix, SoundFX, Audition 4, HD-Rec, and Audio Evolution, there was also much Amiga software for driving digitizers, such as the GVP DSS8 Plus 8-bit audio sampler/digitizer for Amiga, the Sunrize AD512 and AD516 professional 12- and 16-bit DSP sound cards for the Amiga that included Studio-16 as standard software, the Soundstage professional 20-bit DSP expansion sound card for the Amiga, the Aura 12-bit sound sampler which connects to the PCMCIA port of Amiga 600 and Amiga 1200 models, and the Concierto 16-bit sound card optional module to be added to the Picasso IV graphic card, among others.
Sound design / SoftSynth
Synthia, FMSynth by Christian Stiens (inspired by Yamaha's FM-operating DX Series), Assampler, SoundFX (a.k.a. SFX), WaveTracer, S.A.M. Sample-Synthesizer and Gajits' CM-Panion and 4D Companion patch editors.
Mod music file format
Starting from 1987 with the release of Soundtracker, trackers became a new type of music programs which spawned the mod (module) audio file standard. The Mod audio standard is considered the audio format that started it all in the world of computer music. After Soundtracker many clones (which often were reverse engineered and improved) appeared, including Noisetracker, Startrekker, Protracker. Also many derivatives appeared, including OctaMED and Oktalyzer.
In the period from 1985 to 1995 when Amiga audio (which was standard in Amiga computers) was of greater quality than other standard home computers, PC compatible systems began to be equipped with 8-bit audio cards inserted into 16-bit ISA bus slots. Soundtracker Module files were used on PC computers and were considered the only serious 8-bit audio standard for creating music. The worldwide usage of these programs led to the creation of the so-called MOD-scene which was considered part of the demoscene. Eventually the PC world evolved to 16-bit audio cards, and Mod files were slowly abandoned. Various Amiga and PC games (such as Worms) supported Mod as their internal standard for generating music and audio effects.
Some trackers can use both sampled sounds and can synthesize sounds. AHX and Hively Tracker are special trackers in that they can't use samples, but can synthesize the sound created by Commodore 64 computers.
Some modern Amiga trackers are DigiBooster Pro and Hively Tracker.
Development of popular Amiga tracker OctaMED SoundStudio was handed over to a third party several times but the first two parties failed to produce useful results. A third attempt at creating an update will be undertaken by the current developer of Bars 'n Pipes.
MOD filetype evolution
Initially trackers (and the mod format) were limited to 4 channel, 8-bit audio (due to restrictions of the built-in soundchip) and 15 (and later 31) sampled instruments. By using software mixing some trackers achieved 6, 7 or 8 channel sound at the cost of CPU time and audio quality.
Modern trackers can handle 128+ channel, 16-bit audio quality and can often handle up to 256 instruments. Some even support software synthesizer plugins as instruments.
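As an illustration of how compact the classic format is, the sketch below reads the header of a 31-instrument ProTracker-style module using only Python's standard library. The offsets follow the commonly documented layout (a 20-byte title, 31 sample records of 30 bytes each, and a four-byte signature at offset 1080); the file name is hypothetical and less common variants of the format are not handled.

import struct

# Four-byte signatures and the channel counts commonly associated with them.
SIGNATURES = {b"M.K.": 4, b"M!K!": 4, b"FLT4": 4, b"6CHN": 6, b"8CHN": 8}

def read_mod_header(path):
    with open(path, "rb") as f:
        data = f.read(1084)                       # header of a 31-sample module
    title = data[0:20].rstrip(b"\x00").decode("ascii", errors="replace")
    signature = data[1080:1084]
    channels = SIGNATURES.get(signature)          # None for unrecognized variants
    samples = []
    for i in range(31):
        record = data[20 + i * 30: 50 + i * 30]   # 30-byte sample record
        name = record[0:22].rstrip(b"\x00").decode("ascii", errors="replace")
        length_words, = struct.unpack(">H", record[22:24])
        samples.append((name, length_words * 2))  # length is stored in 16-bit words
    return title, signature, channels, samples

title, signature, channels, samples = read_mod_header("example_song.mod")
print(title, signature, channels, sum(length for _, length in samples), "bytes of samples")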
Speech synthesis
The original Amiga was launched with speech synthesis software, developed by Softvoice, Inc. (see: Text2Speech System). This could be broken into three main components: narrator.device, which could enunciate phonemes expressed as ARPABET, translator.library which could translate English text to American English phonemes, and the SPEAK: handler, which any application including the command-line could redirect output to, to have it spoken. Reading SPEAK: as it is producing speech will return two numbers which are the size ratio of the width and height of a mouth producing the phoneme being spoken.
In the original 1.x releases, a Say program demo was included with AmigaBASIC programming examples. From the 2.05 release on, narrator.device and translator.library were no longer present in the operating system but could still be used if copied over from older disks.
The SPEAK: handler was not just a curiosity or a showcase of the Amiga's capabilities. For instance, from version 3.2 the word processor ProWrite was able to read an entire document aloud using the speech synthesizer for the benefit of blind users.
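Because SPEAK: behaves like an ordinary file, any program that can write to a path could drive the synthesizer. The fragment below is a hypothetical sketch that assumes an AmigaOS system with narrator.device and translator.library installed and a Python port available; on other systems the path simply does not exist.

# Hypothetical: on AmigaOS, writing to the SPEAK: handler routes the text
# through translator.library and narrator.device, which speak it aloud.
with open("SPEAK:", "w") as speak:
    speak.write("Hello from the Amiga speech synthesizer.\n")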
See also
List of music software
References
Kato Development Group stops OctaMED development
Online version of the Amiga ROM manual, "Amiga ROM Kernel Reference Manual: Devices", 3rd edition, Commodore Inc., published by Addison-Wesley
External links
AmiWorld list of Amiga software Italian site reporting a list of all known productivity programs for Amiga.
THE comprehensive database about Amiga Software
The classicamiga Software Directory An Amiga directory project aiming to catalogue all known Amiga software.
The Amiga ROM Kernel Reference Manual: Devices (3rd ed.), published by Addison Wesley (1991)
Amiga
Lists of software
fr:Liste de logiciels Amiga |
9360778 | https://en.wikipedia.org/wiki/Proxy%20list | Proxy list | A proxy list is a list of open HTTP/HTTPS/SOCKS proxy servers all on one website. Proxies allow users to make indirect network connections to other computer network services. Proxy lists include the IP addresses of computers hosting open proxy servers, meaning that these proxy servers are available to anyone on the internet. Proxy lists are often organized by the various proxy protocols the servers use. Many proxy lists index Web proxies, which can be used without changing browser settings.
Proxy Anonymity Levels
Elite proxies - such proxies do not alter the request header fields and appear to the destination server as an ordinary client; the user's real IP address is hidden, so server administrators generally cannot tell that a proxy is being used.
Anonymous proxies - these proxies do not reveal the user's real IP address, but they do modify the request fields, so log analysis makes it easy to detect that a proxy is being used; the user remains anonymous, although some server administrators may restrict proxy requests.
Transparent proxies - (not anonymous, simply HTTP) these proxies modify the request fields and pass on the user's real IP address; they are not suitable for security or privacy purposes while surfing the web, and should only be used for network speed improvement.
SOCKS is a protocol that relays TCP sessions through a firewall host to allow application users transparent access across the firewall. Because the protocol is independent of application protocols, it can be (and has been) used for many different services, such as telnet, FTP, finger, whois, gopher, WWW, etc. Access control can be applied at the beginning of each TCP session; thereafter the server simply relays the data between the client and the application server, incurring minimum processing overhead. Since SOCKS never has to know anything about the application protocol, it should also be easy for it to accommodate applications that use encryption to protect their traffic from nosy snoopers. No information about the client is sent to the server – thus there is no need to test the anonymity level of the SOCKS proxies.
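A sketch of how an entry from such a list might be used programmatically, here with Python's standard urllib; the proxy address is a placeholder, and SOCKS proxies would require third-party support (for example a package such as PySocks) because the standard library only handles HTTP and HTTPS proxies.

import urllib.request

# Placeholder address of the kind found in a proxy list entry.
proxy = "203.0.113.10:3128"

opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": "http://" + proxy,
                                 "https": "http://" + proxy})
)

# The request is sent to the proxy, which forwards it to the target site.
with opener.open("http://example.com/", timeout=10) as response:
    print(response.status, len(response.read()), "bytes received via", proxy)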
External links
Computer network security
Computer networking
Internet privacy
Computer security software |
59237118 | https://en.wikipedia.org/wiki/Manchester%20Packet%20%281806%20ship%29 | Manchester Packet (1806 ship) | Manchester Packet was built at New York in 1806. She immediately transferred to British registry and spent a number of years trading across the Atlantic. In 1814 she successfully repelled an attack by a U.S. privateer. In 1818 she returned to U.S. registry. She eventually became a whaler operating out of New London, Connecticut. In May 1828 she made the first of five whaling voyages; she was condemned in 1835 while on her sixth voyage.
Merchantman
Manchester Packet first entered Lloyd's Register in 1806 with P.T. Coffin, master, "New York" owner, and trade Liverpool–New York.
Lloyd's Register for 1810 carries the same information, as does the Register of Shipping, except that it gives her owner as Capt. & Co.
On 20 December 1814 Manchester Packet engaged an American privateer and afterwards had to refit at Salvador, Bahia.
Lloyd's Register for 1818 still carried Manchester Packet. It listed her master as P.T. Coffin, her owner as "New York", and her trades as Liverpool–New York. She was no longer listed in 1819.
In 1824 Manchester Packet twice carried specie in the form of dollars from Havre, France, to New York at the behest of the U.S. Government. On 25 July she delivered $14,800 and on 27 October she delivered $4,840.
Whaler
Whaling voyage #1 (1828–1829): Captain Maxwell (or Marshall) Griffing sailed from New London in May 1828. Manchester Packet returned in June 1829 with 1343 barrels of whale oil.
Whaling voyage #2 (1829–1830): Captain James Fordham sailed from New London in June 1829. Manchester Packet returned in June 1830 with 1194 barrels.
Whaling voyage #3 (1830–1831): Captain Fordham sailed from New London in July 1830. Manchester Packet returned in June 1831 with 23 barrels of sperm oil and 947 barrels of whale oil.
Whaling voyage #4 (1831–1832): Captain Robert N. Tate sailed from New London in 1831. Manchester Packet returned on 27 February 1832.
Whaling voyage #5 (1832–1833): Captain David Reed sailed from New London in 1832. Manchester Packet returned on 3 October 1833 with 230 barrels of sperm oil and 1436 barrels of whale oil.
Loss
Captain David Reed sailed from New London in November 1833. Lloyd's List reported on 18 September 1835 that Manchester Packet, Reid, master, had put into the River Gambia on 27 August, leaky. She was condemned there.
Citations
1806 ships
Ships built in the United States
Age of Sail merchant ships of England
Age of Sail merchant ships of the United States
Whaling ships
Maritime incidents in August 1835 |
68334 | https://en.wikipedia.org/wiki/Desktop%20environment | Desktop environment | In computing, a desktop environment (DE) is an implementation of the desktop metaphor made of a bundle of programs running on top of a computer operating system that share a common graphical user interface (GUI), sometimes described as a graphical shell. The desktop environment was seen mostly on personal computers until the rise of mobile computing. Desktop GUIs help the user to easily access and edit files, while they usually do not provide access to all of the features found in the underlying operating system. Instead, the traditional command-line interface (CLI) is still used when full control over the operating system is required.
A desktop environment typically consists of icons, windows, toolbars, folders, wallpapers and desktop widgets (see Elements of graphical user interfaces and WIMP). A GUI might also provide drag and drop functionality and other features that make the desktop metaphor more complete. A desktop environment aims to be an intuitive way for the user to interact with the computer using concepts which are similar to those used when interacting with the physical world, such as buttons and windows.
While the term desktop environment originally described a style of user interfaces following the desktop metaphor, it has also come to describe the programs that realize the metaphor itself. This usage has been popularized by projects such as the Common Desktop Environment, K Desktop Environment, and GNOME.
Implementation
On a system that offers a desktop environment, a window manager in conjunction with applications written using a widget toolkit are generally responsible for most of what the user sees. The window manager supports the user interactions with the environment, while the toolkit provides developers a software library for applications with a unified look and behavior.
A windowing system of some sort generally interfaces directly with the underlying operating system and libraries. This provides support for graphical hardware, pointing devices, and keyboards. The window manager generally runs on top of this windowing system. While the windowing system may provide some window management functionality, this functionality is still considered to be part of the window manager, which simply happens to have been provided by the windowing system.
Applications that are created with a particular window manager in mind usually make use of a windowing toolkit, generally provided with the operating system or window manager. A windowing toolkit gives applications access to widgets that allow the user to interact graphically with the application in a consistent way.
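To make this division of labour concrete, the short sketch below uses GTK through PyGObject, a widget toolkit commonly found on X and Wayland desktops: the toolkit supplies the window and button widgets with a consistent look and behavior, while the desktop environment's window manager decides how the resulting window is decorated, placed and stacked. It assumes PyGObject with GTK 3 is installed and a graphical session is running.

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

# The toolkit provides ready-made widgets with a consistent look and behavior.
window = Gtk.Window(title="Toolkit demo")
button = Gtk.Button(label="Quit")
button.connect("clicked", lambda _button: Gtk.main_quit())  # widget signal -> callback

window.add(button)
window.connect("destroy", Gtk.main_quit)  # closing the window ends the main loop
window.show_all()
Gtk.main()                                # hand control to the toolkit's event loop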
History and common use
The first desktop environment was created by Xerox and was sold with the Xerox Alto in the 1970s. The Alto was generally considered by Xerox to be a personal office computer; it failed in the marketplace because of poor marketing and a very high price tag. With the Lisa, Apple introduced a desktop environment on an affordable personal computer, which also failed in the market.
The desktop metaphor was popularized on commercial personal computers by the original Macintosh from Apple in 1984, and was popularized further by Windows from Microsoft since the 1990s. The most popular desktop environments are descendants of these earlier environments, including the Windows shell used in Microsoft Windows, and the Aqua environment used in macOS. When compared with the X-based desktop environments available for Unix-like operating systems such as Linux and FreeBSD, the proprietary desktop environments included with Windows and macOS have relatively fixed layouts and static features, with highly integrated "seamless" designs that aim to provide mostly consistent customer experiences across installations.
Microsoft Windows dominates in market share among personal computers with a desktop environment. Computers using Unix-like operating systems such as macOS, Chrome OS, Linux, BSD or Solaris are much less common; however, there is a growing market for low-cost Linux PCs using the X Window System or Wayland with a broad choice of desktop environments. Among the more popular of these are Google's Chromebooks and Chromeboxes, Intel's NUC, the Raspberry Pi, etc.
On tablets and smartphones, the situation is the opposite, with Unix-like operating systems dominating the market, including the iOS (BSD-derived), Android, Tizen, Sailfish and Ubuntu (all Linux-derived). Microsoft's Windows phone, Windows RT and Windows 10 are used on a much smaller number of tablets and smartphones. However, the majority of Unix-like operating systems dominant on handheld devices do not use the X11 desktop environments used by other Unix-like operating systems, relying instead on interfaces based on other technologies.
Desktop environments for the X Window System
On systems running the X Window System (typically Unix-family systems such as Linux, the BSDs, and formal UNIX distributions), desktop environments are much more dynamic and customizable to meet user needs. In this context, a desktop environment typically consists of several separate components, including a window manager (such as Mutter or KWin), a file manager (such as Files or Dolphin), a set of graphical themes, together with toolkits (such as GTK+ and Qt) and libraries for managing the desktop. All these individual modules can be exchanged and independently configured to suit users, but most desktop environments provide a default configuration that works with minimal user setup.
Some window managers, such as IceWM, Fluxbox, Openbox, ROX Desktop and Window Maker, contain relatively sparse desktop environment elements, such as an integrated spatial file manager, while others like evilwm and wmii do not provide such elements. Not all of the program code that is part of a desktop environment has effects which are directly visible to the user. Some of it may be low-level code. KDE, for example, provides so-called KIO slaves which give the user access to a wide range of virtual devices. These I/O slaves are not available outside the KDE environment.
In 1996 the KDE was announced, followed in 1997 by the announcement of GNOME. Xfce is a smaller project that was also founded in 1996, and focuses on speed and modularity, just like LXDE which was started in 2006. A comparison of X Window System desktop environments demonstrates the differences between environments. GNOME and KDE were usually seen as dominant solutions, and these are still often installed by default on Linux systems. Each of them offers:
To programmers, a set of standard APIs, a programming environment, and human interface guidelines.
To translators, a collaboration infrastructure. KDE and GNOME are available in many languages.
To artists, a workspace to share their talents.
To ergonomics specialists, the chance to help simplify the working environment.
To developers of third-party applications, a reference environment for integration. OpenOffice.org is one such application.
To users, a complete desktop environment and a suite of essential applications. These include a file manager, web browser, multimedia player, email client, address book, PDF reader, photo manager, and system preferences application.
In the early 2000s, KDE reached maturity. The Appeal and ToPaZ projects focused on bringing new advances to the next major releases of both KDE and GNOME respectively. Although striving for broadly similar goals, GNOME and KDE do differ in their approach to user ergonomics. KDE encourages applications to integrate and interoperate, is highly customizable, and contains many complex features, all whilst trying to establish sensible defaults. GNOME on the other hand is more prescriptive, and focuses on the finer details of essential tasks and overall simplification. Accordingly, each one attracts a different user and developer community. Technically, there are numerous technologies common to all Unix-like desktop environments, most obviously the X Window System. Accordingly, the freedesktop.org project was established as an informal collaboration zone with the goal being to reduce duplication of effort.
As GNOME and KDE focus on high-performance computers, users of less powerful or older computers often prefer alternative desktop environments specifically created for low-performance systems. Most commonly used lightweight desktop environments include LXDE and Xfce; they both use GTK+, which is the same underlying toolkit GNOME uses. The MATE desktop environment, a fork of GNOME 2, is comparable to Xfce in its use of RAM and processor cycles, but is often considered more as an alternative to other lightweight desktop environments.
For a while, GNOME and KDE enjoyed the status of the most popular Linux desktop environments; later, other desktop environments grew in popularity. In April 2011, GNOME introduced a new interface concept with its version 3, while a popular Linux distribution Ubuntu introduced its own new desktop environment, Unity. Some users preferred to keep the traditional interface concept of GNOME 2, resulting in the creation of MATE as a GNOME 2 fork.
Examples of desktop environments
The most common desktop environment on personal computers is Windows Shell in Microsoft Windows. Microsoft has made significant efforts in making Windows shell visually pleasing. As a result, Microsoft has introduced theme support in Windows 98, the various Windows XP visual styles, the Aero brand in Windows Vista, the Microsoft design language (codenamed "Metro") in Windows 8, and the Fluent Design System and Windows Spotlight in Windows 10. Windows shell can be extended via Shell extensions.
Mainstream desktop environments for Unix-like operating systems use the X Window System, and include KDE, GNOME, Xfce, LXDE, and Aqua, any of which may be selected by users and are not tied exclusively to the operating system in use.
A number of other desktop environments also exist, including (but not limited to) CDE, EDE, GEM, IRIX Interactive Desktop, Sun's Java Desktop System, Jesktop, Mezzo, Project Looking Glass, ROX Desktop, UDE, Xito, XFast. Moreover, there exists FVWM-Crystal, which consists of a powerful configuration for the FVWM window manager, a theme and further add-ons, altogether forming a "construction kit" for building up a desktop environment.
X window managers that are meant to be usable stand-alone, without another desktop environment, also include elements reminiscent of those found in typical desktop environments, most prominently Enlightenment. Other examples include OpenBox, Fluxbox, WindowLab, Fvwm, as well as Window Maker and AfterStep, which both feature the NeXTSTEP GUI look and feel. However, newer versions of some operating systems configure such elements automatically.
The Amiga approach to desktop environment was noteworthy: the original Workbench desktop environment in AmigaOS evolved through time to originate an entire family of descendants and alternative desktop solutions. Some of those descendants are the Scalos, the Ambient desktop of MorphOS, and the Wanderer desktop of the AROS open source OS. WindowLab also contains features reminiscent of the Amiga UI. Third-party Directory Opus software, which was originally just a navigational file manager program, evolved to become a complete Amiga desktop replacement called Directory Opus Magellan.
OS/2 (and derivatives such as eComStation and ArcaOS) use the Workplace Shell. Earlier versions of OS/2 used the Presentation Manager.
The BumpTop project was an experimental desktop environment. Its main objective is to replace the 2D paradigm with a "real-world" 3D implementation, where documents can be freely manipulated across a virtual table.
Gallery
See also
Wayland – an alternative to X Windows which can run several different desktop environments
References |
38889092 | https://en.wikipedia.org/wiki/Hearthstone | Hearthstone | Hearthstone is a free-to-play online digital collectible card game developed and published by Blizzard Entertainment. Originally subtitled Heroes of Warcraft, Hearthstone builds upon the existing lore of the Warcraft series by using the same elements, characters, and relics. It was first released for Microsoft Windows and macOS in March 2014, with ports for iOS and Android releasing later that year. The game features cross-platform play, allowing players on any supported device to compete with one another, restricted only by geographical region account limits.
The game is a turn-based card game between two opponents, using constructed decks of 30 cards along with a selected hero with a unique power. Players use their limited mana crystals to play abilities or summon minions to attack the opponent, with the goal of destroying the opponent's hero. Winning matches and completing quests earn in-game gold, rewards in the form of new cards, and other in-game prizes. Players can then buy packs of new cards through gold or microtransactions to customize and improve their decks. The game features several modes of play, including casual and ranked matches, drafted arena battles, and single-player adventures. New content for the game involves the addition of new card sets and gameplay, taking the form of either expansion packs or adventures that reward the player with collectible cards upon completion.
In contrast to other games developed by Blizzard, Hearthstone was an experimental game developed by a smaller team based on the appreciation of collectible card games at the company. The game was designed to avoid pitfalls of other digital collectible card games by eliminating any possible plays from an opponent during a player's turn and by replicating the feel of a physical card game within the game's user interface. Many of the concepts as well as art assets were based on those previously published in the physical World of Warcraft Trading Card Game.
The game has been favorably reviewed by critics and has been a success for Blizzard, earning nearly per month as of August 2017. , Blizzard has reported more than 100 million Hearthstone players. The game has become popular as an esport, with cash prize tournaments hosted by Blizzard and other organizers.
Gameplay
Image: Hearthstone's collection interface displaying cards for the Mage class.
Set within the Warcraft universe, Hearthstone is a digital-only, turn-based collectible card game which pits two opponents against each other. Players select a hero from one of ten classes. All classes have unique cards and abilities, known as hero powers, which help define class archetypes. Each player uses a deck of cards from their collection with the end goal being to reduce the opponent's health to zero.
There are four different types of cards: minions, spells, weapons, and hero cards. Quests are a specific type of spell only found in three expansions. These cards are ordered by rarity, with Legendary cards being the rarest, followed by Epic, Rare, Common, and Basic. Blizzard releases expansions of additional cards every four months to increase the variety in the metagame. The game uses a freemium model of revenue, meaning players can play for free or pay to acquire additional card packs or content.
Unlike other card games such as Magic: The Gathering, Hearthstone was designed to speed up play by eliminating any manual reactions from the opposing player during a player's turn, and setting a timer for each player's turn. During a turn, players play cards from their hand using "mana", a budget each player must abide by which increases by one each turn with a maximum of ten, and with cards having various mana costs. This invokes strategy as the player must plan ahead, taking into account what cards can and cannot be played. Minions and spells are unique.
Minions will be placed directly onto the board after being played and may carry special effects like Charge or Deathrattle, allowing the minion to attack instantly or making the minion do something special upon death, respectively. Spells have distinctive effects and affect the board in various ways.
Cards can be obtained through opening card packs or by crafting them with arcane dust.
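A toy model of the mana budget described above, written only to make the turn arithmetic concrete; it is not Blizzard's implementation, and the example hand and card costs are invented, ignoring special effects such as The Coin or cost discounts.

MAX_MANA = 10

def playable(hand, turn):
    # Return the cards affordable on the given turn, picked greedily in hand
    # order: mana crystals grow by one per turn up to a cap of ten, and each
    # card played spends its cost from that turn's budget.
    mana = min(turn, MAX_MANA)
    affordable = []
    for name, cost in hand:
        if cost <= mana:
            affordable.append(name)
            mana -= cost
    return affordable

hand = [("Minion A", 2), ("Spell B", 3), ("Minion C", 7)]
print(playable(hand, 2))    # ['Minion A']
print(playable(hand, 10))   # ['Minion A', 'Spell B'] -- the 7-cost card no longer fits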
Game modes
The normal gameplay mode is one-on-one matches between a player and a randomly selected human opponent. Within this, the Standard game mode uses prepared decks limited to cards from the Core set alongside the expansions from the last two years. A separate Wild game mode allows all past and present cards to be used subject to deck construction rules. Both Standard and Wild game modes are divided into Casual and Ranked modes. Players can climb the tiered ranking system in Ranked, while Casual allows for a more relaxed play-style. At the end of each month the Ranked season ends, rewarding players with in-game items depending on their performance.
Other more specialized multiplayer modes include the following:
Arena has the player draft a deck of thirty cards, choosing from sets of three cards over several rounds. Players continue to use this deck against other Arena decks until they win or lose a set number of matches, after which the deck is retired and players gain in-game rewards based on their record.
Tavern Brawls are challenges that change weekly and may impose unusual deck-building guidelines.
Battlegrounds, introduced in November 2019, is based on the auto battler genre, allowing eight players to compete in each match by recruiting minions over several rounds. Players are paired off randomly in each round, with combat between minions played out automatically; the goal is to have minions remaining to damage the opponent's hero and ultimately to be the last hero standing. The top four heroes earn a win and gain rating points, while the bottom four take a loss and lose rating points.
Duels, introduced in October 2020, is a multiplayer version of Hearthstone's singleplayer "Dungeon Run" game mode. Players start with a 15-card deck they assemble themselves, and (like Arena) battle other players until they win or lose a number of matches, after which the deck is retired and players gain in-game rewards based on their record. After each match, the player chooses between three 'buckets' of three cards each, or a treasure card to add to their deck. Unlike Arena, there is a casual mode that requires no entry fee.
Classic mode uses a mirror of the player's library of all cards that were in the game as of the June 2014 release of the game, reverting any updates or changes to these cards in the interim, effectively representing the game's start at the time of its release.
Mercenaries, introduced in October 2021, is focused on a party-based combat system with roguelike mechanics. A player creates a party of six minions from a central Minion Village and uses that party to complete various quests, both as player-versus-environment and player-versus-player. Battles in this mode use a color-coded system similar to rock paper scissors, where minions of one color are strong against another color but weak to the third color. Players use this system and minion abilities to try to win battles. With loot gained from combat success, players can use facilities in the Minion Village to improve the attributes and abilities of individual minions or recruit new minions.
In addition to these multiplayer modes, there are solo adventures. These adventures offer alternative ways to play and are designed specifically to challenge the player.
Card sets
The following table lists the card set releases by their name, type, North American date of release (with the release in other regions typically within a day afterward), the date of the expansion's removal from the Standard format, and the distribution of cards within that set.
Initially, Blizzard introduced an alternating series of Expansions and Adventures, with roughly three new sets released each year. Expansions are new card sets, containing between 100 and 200 new cards, that become available to buy or win, as well as introducing new mechanics to the gameplay. Adventures feature a smaller number of cards, around 30, which can only be earned by completing multiple tiers of story-based challenges and boss fights in single-player mode.
In 2017, Blizzard changed their approach, and focused on Expansions and mini-sets for cards, with adventures providing non-card rewards.
Later, Blizzard moved away from Adventures as they found that because Adventures gated the set's cards until the challenges were completed, these cards did not readily enter the meta-game, and when they did, they would be used more by expert players who could easily complete the Adventures' challenges compared to amateur players. Blizzard recognized that players do enjoy the single-player narrative events and have worked in quests and missions around the new card sets for those players. Examples of these quests and missions include facing the bosses of Icecrown Citadel with Knights of the Frozen Throne's release, and the new dungeon run feature which appeared in the Kobolds & Catacombs expansion.
Blizzard has adopted a "Year" moniker to identify when expansions rotate and retire from Standard format. At the commencement of the first year, "Year of the Kraken" (from April 2016 to April 2017), Blizzard retired the Curse of Naxxramas and Goblins vs Gnomes sets. At the commencement of the second year, "Year of the Mammoth" (from April 2017 to April 2018), Blizzard retired the Blackrock Mountain, The Grand Tournament and League of Explorers sets. At the commencement of the third year, "Year of the Raven" (April 2018 to early 2019), Blizzard retired the Whispers of the Old Gods, One Night in Karazhan and Mean Streets of Gadgetzan sets. Initially, after such time as the adventures and expansions were retired, these sets were no longer available for purchase. However, due to player demand in July 2017, players were again able to purchase these retired sets and all future sets that are retired from Standard by using real money on Blizzard's online store. In the "Year of the Mammoth", Standard moved some Classic cards to the "Hall of Fame" set that is not playable in Standard but the cards still can be obtained and are available to play in Wild format. In the "Year of the Raven", three additional Classic cards were moved to the "Hall of Fame" set.
In 2021, Blizzard introduced an annually rotating Core set that can be used in Standard and Wild modes. The first iteration of the set consists of 235 cards: 31 new ones and 204 selected from various non-Standard sets. The Core set is free to use for all players ranked at least level 10 with all classes. With the introduction of the Core set, the Basic, Classic, and Hall of Fame sets were grouped into a Legacy set confined to the Wild mode. Alongside the Core set, Classic mode was introduced where only the original 2014 versions of cards from the old Classic set can be used.
Development
Conception
Development of Hearthstone at Blizzard was inspired by two directions, according to developer Eric Dodds: a desire for Blizzard to develop something more experimental with a smaller team in contrast to their larger projects, and the shared love of collectible card games throughout the company. Blizzard executives, around 2008, had considered that their revenue was primarily sustained on three well-established properties (the Warcraft, StarCraft, and Diablo series), but saw the rise of small independent developers with highly successful projects, representing a shift in the traditional video game model. To explore this new direction, Blizzard brought a number of people into "Team 5", named after being the fifth development team formed at Blizzard. Initially, the team had between 12 and 15 members, in contrast to other Blizzard games with teams in excess of 60 members. By November 2015, the team had 47 members.
Of the game types they explored, Team 5 soon focused on the collectible card game approach, given that many on the team and in Blizzard had played such games since their introduction. The team found it natural to build the card game around the existing Warcraft lore; according to production director Jason Chayes, Warcraft was already a well-known property, and the depth of characters and locations created for other games in that series made it easy to create cards inspired by those. They also saw that new players to Warcraft may be drawn into the other games through playing Hearthstone.
The team was able to pull concepts and art from the pre-existing World of Warcraft Trading Card Game, first published in 2006 by Upper Deck and later by Cryptozoic Entertainment; when Hearthstone was near completion, in 2013, Blizzard terminated its license with Cryptozoic so as to favor its pending digital card game. The addition of heroes, an aspect from the previous trading card game, was found to help personalize the game for the player and to allow players to discover useful combinations of cards for each hero.
Game design and programming
After about a year of starting development, the team had produced an Adobe Flash prototype of the game that offered the core card game mechanics and the unique heroes. At this point, several on Team 5 were temporarily moved into other teams to complete the release of StarCraft II: Wings of Liberty. This 10-to-11 month period was considered fortuitous by the team, according to Chayes. Principal designers Dodds and Ben Brode remained developing Hearthstone, and the two were able to quickly iterate many ideas using both the prototype and physical replicas to fine-tune the game mechanics. Secondly, those that were put on StarCraft II came back with ideas based on StarCrafts asymmetric gameplay to help balance the various heroes while still creating a unique characterization and play-style for each.
Further development on the game's user interface began using the principle that the game needed to be both accessible and charming, being able to draw in new players and to make clear how to play the game. Unity is used as the game engine in the interest of development speed and smooth client performance; all of the rules and calculations live on the server, which then tells the client what happened. Dodds stated that "it's important that you don't have to spend a lot of time understanding the rules to play the game, the depth grows as you go." Gameplay elements such as pre-made decks for each hero, deck-building aids, and visual cues on which cards could be played were used to guide new players. Card text was written so that a new player should be able to immediately understand the effects.
From the beginning, the game was designed to be played solely online and to mimic the feel of physical cards to make it more accessible to new players. Dodds found that past attempts to digitize physical card games by other companies left areas they felt were lacking, and wanted to improve on that experience. One particular example is card games where players have the ability to react to other players; Dodds noted that when playing in the same room as another player, these types of interactions are straightforward, but consume a great deal of time in a virtual space. Hearthstone was designed to eliminate any gameplay from the opponent during the player's turn, streamlining the game.
Other aspects of the game's interface were set to replicate the feel of a physical game being watched by an audience: Hearthstone starts with the player opening a box, during gameplay the cards waver and move while in their hand, and cards when played slam down on the board. When attacking, cards leap across the board to strike the target; when a massive spike of damage is dealt, the board shakes; when a massive creature is summoned, the unseen audience gasps in awe. Hearthstone also offers interactive boards. The boards on which the cards are played can be interacted with in various ways, such as virtually petting a dragon, although the feature is purely for entertainment and has no effect on gameplay. This idea came out from the movie Jumanji in which a board game comes to life, and also mimics how physical card players would often toy with their cards while waiting on their opponent.
Unlike physical trading card games, Hearthstone was designed early on without any trading system between players. Hamilton Chu, the executive producer of Hearthstone, stated that "a key thing for us was focusing on [the user]... playing the game", and that trading and market features would dilute this experience. Blizzard wanted to do things such as avoid a free market where card values could fluctuate, discourage cheating methods like bots and duping, reduce the unauthorized third party sales (all against the terms of use), and keep the profit derived from the game for the company.
The game's name, Hearthstone, was meant to imply to a close gathering of friends by a hearth, a goal of what they want players to feel. According to Chayes, they had experimented with other constructs of where these card games would take place, and only about halfway through development came onto the idea of using a pub's hearth as the theme; Chayes stated that with that concept, "this is a great way to play, it works with all our values, it has a lot of charm". To maintain a friendly environment around this construct, they added in the ability to trigger one of a few friendly compliments that can be said by a hero, so that players could still emote to their opponent without having to worry about any vitriol.
Soundtrack
The soundtrack was composed by Peter McConnell; with trailer music by Jason Hayes. According to McConnell and Dodds, who oversaw the music direction, they wanted to create a soundtrack that would reflect the tavern setting they had established for the game, but they did not want to overwhelm this theme. McConnell came upon the idea of mixing Celtic music with blues rock—pondering the idea of "what if ZZ Top or Golden Earring had been transported back in time to the Middle Ages?"—and working in other previous Warcraft themes among the new songs with help from Hayes. Hayes also worked with Glenn Stafford to create short "stingers" of music used when players summon Legendary cards.
Beta changes
The beta testing periods were used to evaluate the game's balance, adjusting cards found to be too powerful or too weak, and making sure no single hero or deck type dominated the game. As they approached the game's release in March 2014, Blizzard found that it was hard to generate interest in getting people to try the game; those they asked to try the game felt that Hearthstone was not the type of game they would be interested in playing. At this point, Blizzard opted to make Hearthstone free to play and while card packs can be bought with in-game currency earned through winning matches and completing quests, players can also buy packs if they do not want to wait on earning currency. This helped to significantly boost the game's popularity on release and led to the development of the "quests" feature, further allowing players to earn more in game rewards for free.
Ongoing support
Blizzard provides regular bug fixes, updates, and expansions for Hearthstone. Hamilton Chu, the former executive producer for Hearthstone, stated that Blizzard intends to support the game for ten to twenty years. The principal means of introducing additional cards to the game has been through either themed Expansions or Adventures. Blizzard had originally envisioned releasing Expansions in a staged approach so as not to drastically jar the player community, creating the Adventure concept for the first post-release addition with Curse of Naxxramas. The meta-game remained unpredictable for several months, helping to keep the playing community interested as established strategies were repeatedly invalidated. The solo challenges in Adventure mode also served as a means to help players understand some of the stronger archetypes of card decks and learn strategies to defeat them, helping them become better players against human opponents. From 2017's "Year of the Mammoth" onward, expansions focused on new card sets, forgoing the earlier Adventure format, but new solo adventure types were later added.
Development of the themes and mechanics for each Expansion and Adventure are often based on the current atmosphere around the Hearthstone community, according to senior designer Mike Donais. While early expansions were based on the Warcraft franchise, the developers have been able to move away from staying with that narrative and are free to create new aspects not established by Warcraft. This idea was reflected by the dropping of the "Heroes of Warcraft" subtitle from the game's name around December 2016 to demonstrate to new players that the game was no longer tied to Warcraft.
In addition to new cards and mechanics, Blizzard has also worked to add new features to Hearthstone itself. The Tavern Brawl mode was in development for over a year before it was released in June 2015; the feature went through many iterations before the team was satisfied. Dodds equated the Tavern Brawl mode as a place to try experimental mechanics that may later be introduced to the game, as well as to offer gameplay that varies significantly from other areas of play within Hearthstone. Blizzard experimented with cross-platform play during development, having successfully played a game on PC against a player using an iPad; however, it was not a feature at launch. Cross-platform play was added in April 2014.
The introduction of the Standard vs. Wild formats in April 2016 was an issue that the developers knew since Hearthstones initial release that they would need to address; according to Brode, as new cards were introduced to the game, they recognized that new players would start to find the game inaccessible, while adjusting the balance of the meta-game of which cards from previous expansions had proven over- or underused. The ideas for how to actually implement Standard mode started about a year before its introduction. Though they will continue to design the game to maintain the appropriate balance for the Standard format, they will also monitor how future cards will impact the Wild format and make necessary changes to keep that mode entertaining. With the "Year of the Mammoth" changes to Standard, the designers opted to move some Classic cards to a new "Hall of Fame" set that is not usable in Standard. They found that these cards were often "auto-includes" for certain deck types, and created a stagnant metagame around those decks, and opted to move them out of Standard. As compensation, those that own these Hall of Fame cards received the arcane dust value of the cards they possess while still being able to use those cards in Wild. The "Hall of Fame" format also allows Blizzard to move Classic cards that have been nerfed (purposely weakened) previously to be un-nerfed and moved into the "Hall of Fame"; Blizzard found that players using Wild decks were impacted significantly by these nerfs and this approach would allow those deck formats to still thrive without disrupting Standard. To make up for cards moving out of Classic, Blizzard may consider bringing in individual cards from retired sets into the Classic set that they believe would be suitable for Standard. The associated switch of Arena mode from Wild to Standard with modified card rarity distributions with the "Year of the Mammoth" update was aimed to keep the pool of cards available to draft smaller, increasing the chances of drafting cards that they had intended to be used in synergistic combinations from the individual expansions.
In July 2019, several cards underwent artwork changes (and two were renamed) to be less graphically violent and sexualized. Lead mission designer Dave Kosak said, "It wasn't because we were looking at ratings, or international [regulations], or anything like that. We really just wanted our artists to feel good about everything in the set."
Starting in 2020 and ongoing, Blizzard started to view Hearthstone as a platform for multiple game modes rather than fixed around the main one-on-one game. Internally, multiple "strike teams" within Blizzard worked on the multiple aspects of this new approach simultaneously, with some teams working on the game modes while others work on new card and expansion ideas.
Release
Hearthstone was first announced with the subtitle Heroes of Warcraft at Penny Arcade Expo in March 2013 for Windows, Mac, and iPad, with an expected release date in the same year. Internal beta testing of the game within Blizzard began in 2012. In August 2013, the game went into closed beta, to which over one million players had been invited as of November 8, 2013, with plans to enter open beta in December. Blizzard continued closed beta into mid-January 2014 despite their original estimation. Blizzard announced open beta for North America on January 21, 2014. Open beta was announced for Europe on January 22, 2014 and on January 23, 2014, open beta was made available in all regions.
The game was released on March 11, 2014, available on Microsoft Windows and macOS operating systems. By the end of March 2014, the game had more than 10 million player accounts registered worldwide. On April 2, 2014, the game was released for iPad in Australia, Canada and New Zealand. On April 16, 2014, it was released globally for iPads. On August 6, 2014, support for Windows 8 touchscreen devices was added to the game, although not for Windows RT devices. On December 15, 2014, the game was released for Android tablets 6" or larger in Australia, Canada and New Zealand and on December 16, 2014, it was widely released for Android tablets. On April 14, 2015, the game was released for iPhone and Android smartphones worldwide. The smartphone version of the game includes new UI elements that place the player's hand on the bottom right but only half visible, so players must tap on their hand to zoom in and play cards. Single cards can also be viewed full screen by tapping and holding on a specific card, which is useful to read all the card details while using a smartphone display.
In-game promotions
To mark the release of Hearthstone, Blizzard released the Hearthsteed mount for World of Warcraft players that is obtained by winning three games in Arena or Play mode. Widely advertised on various World of Warcraft websites, this promotion encourages players to try Hearthstone, and marked the first significant crossover implemented between Blizzard games. Since then, multiple promotions have been implemented in other Blizzard titles such as Diablo III: Reaper of Souls, Heroes of the Storm, StarCraft II: Legacy of the Void and Overwatch.
An alternate hero for Shaman, Morgl the Oracle, is available through Hearthstone's "Recruit A Friend" program after the recruited friend reaches level 20. Players that connected their Amazon Prime subscription to Twitch Prime in late 2016 earned the alternate Priest hero Tyrande Whisperwind. Other Twitch Prime promotions have included a golden pack, which is a Classic card pack that only contains golden versions of cards, two exclusive card backs, and two Kobolds & Catacombs packs.
Since the Blackrock Mountain adventure, each expansion and adventure has introduced an exclusive card back for players that pre-ordered it. The Boomsday Project, Rastakhan's Rumble, Rise of Shadows, Saviors of Uldum, Descent of Dragons, Ashes of Outland, Scholomance Academy, Madness at the Darkmoon Faire, and Forged in the Barrens expansions each offered an alternate hero portrait as a bonus for ordering the largest preorder bundle: Mecha-Jaraxxus for Warlock, King Rastakhan for Shaman, Madame Lazul for Priest, Elise Starseeker for Druid, Deathwing for Warrior, Lady Vashj for Shaman, Kel'Thuzad for Mage, N'Zoth for Warlock, and Hamuul Runetotem for Druid, respectively.
Other media
To promote the Journey to Un'Goro set, Blizzard made a web series called "Wonders Of Un'Goro" featuring an adventurer exploring the area.
Before releasing the Knights of the Frozen Throne set, Blizzard worked with Dark Horse Comics to publish a three-issue comic book series based on the set's Death Knight theme.
To promote the Kobolds & Catacombs set, Blizzard released "The Light Candle", a live-action short spoofing films from Jim Henson of the 1980s while its characters are exploring a dungeon.
Esports
Thanks to the designers' emphasis on accessibility and fast-paced gameplay, Hearthstone has been the focus of a number of tournaments. Blizzard hosted an exhibition tournament in November 2013 called "The Innkeeper's Invitational" with three decks each of a different class, featuring several well-known gamers such as Dan "Artosis" Stemkoski, Octavian "Kripparrian" Morosan, Jeffrey "TrumpSC" Shih and Byron "Reckful" Bernstein. Artosis won the best-of-five tournament. Hearthstone was the focus of a number of other tournaments during its closed beta, including those hosted by Major League Gaming and ESL. In March 2014, the esports organization Tespa announced the Collegiate Hearthstone Open, a free-to-enter tournament open to all North American college students, featuring $5,000 in scholarships. Major League Gaming, ESL and the ZOTAC Cup all continue to regularly host minor Hearthstone leagues in the North American and European territories with small or no prize pools aimed at everyday players. Blizzard staff were stated to have been surprised with the game's success as an esport during its closed beta.
In April 2014, Blizzard announced the first Hearthstone World Championship would be held at BlizzCon on November 7–8. The tournament featured players from each of the game's four regions, with each region holding its own regional qualifying tournament. The Americas and Europe regions' qualifiers featured 160 players each and determined half of those players from actual in-game performance in Ranked play during the April–August seasons. The four most successful participants of each region's qualifiers went to the World Championship, for a total of 16 players. The Hearthstone World Championship 2014 featured a total prize pool of $250,000, and the American winner, James "Firebat" Kostesich, received $100,000.
The second Hearthstone World Championship was held at BlizzCon 2015 on November 7, with players selected in a similar way to the previous year, and was played in the best-of-five conquest format; the Swedish winner, Sebastian "Ostkaka" Engwall, received $100,000. The third World Championship was held at BlizzCon 2016 on November 5 and the Russian winner, Pavel Beltiukov, received $250,000. It was played in a Swiss-system tournament format and one class could be banned from use by the opponent. The fourth World Championship had a $1 million prize pool and took place in January 2018 in Amsterdam. The championship was moved to January to better accommodate the timing of Standard mode's yearly rotation. The Taiwanese winner, Chen "tom60229" Wei Lin, received $250,000. The fifth Hearthstone World Championship took place in April 2019 in Taipei; the winner, Norwegian Casper "Hunterace" Notto, received $250,000. The first Hearthstone Grandmasters Global Finals was held at BlizzCon 2019; the Chinese winner, Xiaomeng "VKLiooon" Li, received $200,000 and was the first woman to win the Hearthstone World Championship and any BlizzCon tournament. The seventh Hearthstone World Championship winner was the Japanese player Kenta "Glory" Sato, who received $200,000. Hearthstone has also been a part of a number of esport demonstration events at international competitions, such as the 2017 Asian Indoor and Martial Arts Games and 2018 Asian Games.
Reception
Hearthstone received "universal acclaim" on iOS and "generally favorable" reviews for PC, according to review aggregator Metacritic. The game was praised for its simplicity, gameplay pace, and attention to detail along with being free-to-play, while the lack of actual card trading between players and any form of tournament mode was pointed out as the major shortcomings.
Eurogamer gave the game the perfect score of 10 and remarked that the game is "overflowing with character and imagination, feeds off and fuels a vibrant community of players and performers, and it only stands to improve as Blizzard introduces new features, an iPad version and expansions."
IGN and Game Informer both gave the game a slightly lower grade of 9/10, with IGN's Justin Davis praising the game for its "elegant simplicity of rules" and "impressive attention to detail and personality, and the true viability of playing completely for free make it easy to fall under its spell and get blissfully lost in the depths of its strategic possibilities."
GameSpot gave the game a score of 8/10, praising the game for its depth and complexity. The only major drawback noted was that the "absence of extra features hampers long-term appeal".
Later Hearthstone card expansions have also been well received. Game Informer rated the Curse of Naxxramas expansion 9/10, stating "Naxxramas is an excellent addition to the core game, and an exploration of sorts to examine the potential for additional single-player Hearthstone content [...] the adventure provides a substantial amount of new content that spills over into ranked, casual, and arena mode and changes how you approach the game." PC Gamer found that "[Curse of Naxxramas is] a much-needed and fun refresher for Blizzard’s card battler", however "the next card expansion will need to be more sizeable", rating it 78/100. Reception for Goblins vs Gnomes has also been positive, with Game Informer writing "the first expansion set for Hearthstone is a major step forward for the already accessible and fun game", and awarding it a score of 9.25/10, while Eurogamer scored it an 8/10, writing "whatever happens to Hearthstone in the future, the new content has stumbled a little by strengthening certain deck archetypes that needed no such help [...] it's re-introduced a thoughtfulness to play that's been absent for too long."
Commentators have noted that Hearthstone can suffer from "pay to win" mechanics, in that those who invest money into the game to get new cards and packs generally have a better chance of winning, though it is possible to be successful without spending money. Some have observed that with some of the newer expansions, which require strong Legendary cards to construct good decks around, one may need to spend about $50 to $100 to get the right cards to maintain many successful decks in the Standard format, belying the game's free-to-play nature. Daniel Friedman for Polygon estimated in 2017 that maintaining a complete collection would cost about $400 per year in booster pack purchases. Friedman argues that the need to stay current for hard-core players is compounded by the power creep that comes with each new expansion, which tends to diminish the effects of cards from older expansions. Friedman adds that this cost is less of an issue since it is still possible to rank well during each season of play with fundamental deck types.
Sales and playerbase
By September 2014, there were more than 20 million registered Hearthstone players and by January 2015, there were more than 25 million. As of June 2015, the active players were estimated to be about eight million PC players and nine million mobile device players, with some overlap between each group. Blizzard reported 30 million players in May 2015, 40 million in November 2015 and 50 million in April 2016. Blizzard reported it gained 20 million players over the following year, reaching 70 million unique players, and that they saw record numbers for simultaneous players during the launch of the Journey to Un'Goro expansion in April 2017. By November 2018, Blizzard stated that Hearthstone had achieved over 100 million players. In the November 2021 Year of the Phoenix Review, Blizzard reported that there were over 20 million active players in 2020.
On May 6, 2015, Activision Blizzard announced that Hearthstone and Destiny generated nearly in revenue for the company. According to SuperData Research, Hearthstone generated about $20 million in revenue during June 2015. KeyBanc Capital Markets estimates that Hearthstone generates an annual revenue of worldwide. Hearthstone has proved to be a popular game to stream and watch on services like Twitch; Hearthstone-based streams overtook Dota 2 streams to become the third-most watched game on the platform in September 2015, and it was the fourth-most watched game in April 2016. In March 2017, Hearthstone was still the fourth-most watched game while nearly matching Dota 2's hours.
Awards
Forbes awarded Hearthstone as the best digital card game of 2013. At The Game Awards 2014, Hearthstone was awarded the best mobile/handheld game. In December 2014, GameSpot awarded Hearthstone with mobile game of the year. GameTrailers awarded Hearthstone with multiplayer game of the year and best overall game of 2014. At the 18th Annual DICE Awards, Hearthstone was awarded with "Mobile Game of the Year" and "Strategy/Simulation Game of the Year", as well as nominations for "Game of the Year", "Outstanding Achievement in Game Design", "Outstanding Innovation in Gaming", and "Outstanding Achievement in Online Gameplay". At the 2014 BAFTA Awards, Hearthstone won best multiplayer game. At the 2014 NAVGTR Awards Hearthstone won the Game, Strategy (Dan Elggren) award. The One Night in Karazhan expansion pack won the award each for "Best Handheld Audio" and "Best Sound Design in a Casual/Social Game" at the 15th Annual Game Audio Network Guild Awards, whereas its other nomination was for "Best Music in a Casual/Social Game". In 2018, the Kobolds & Catacombs expansion pack was nominated for "Best Sound Design in a Casual/Social Game", while the game itself won the award for "Best Original Song" with "Hearth and Home" at the 16th Annual Game Audio Network Guild Awards. In 2019, The Boomsday Project'' won the awards for "Best Music in a Casual/Social Game" and "Best Sound Design in a Casual/Social Game" at the 17th Annual Game Audio Network Guild Awards.
Notes
References
External links
2014 video games
Android (operating system) games
Digital collectible card games
Free-to-play video games
IOS games
MacOS games
Video games containing loot boxes
Video games developed in the United States
Video games scored by Peter McConnell
Video games with cross-platform play
Windows games |
34827396 | https://en.wikipedia.org/wiki/Asus%20Transformer%20Pad%20TF300T | Asus Transformer Pad TF300T | The Asus Transformer Pad TF300T is a 2-in-1 detachable tablet from the Asus Transformer Pad series. It runs Android, has a quad-core processor, and is a successor to the Asus Eee Pad Transformer Prime. The Transformer design includes an optional docking keyboard. The Asus Transformer Pad TF300T was released on the market in the U.S. and Europe in May 2012.
Features
The Asus Transformer Pad TF300T is a tablet computer with an LED 10.1" IPS 10-finger multi-touch screen with a resolution of 1280x800. Unlike the display of the ASUS Eee Pad Transformer Prime, this display is not Super IPS+. The unit does not employ Gorilla Glass, and is therefore more susceptible to breakage.
The processor is an Nvidia Tegra 3 T30L at 1.2 GHz upon initial release with Android 4.0.X ICS, but overclocked to 1.3 GHz upon updating to Android 4.1 Jelly Bean (latest firmware is Android 4.2.1). The Transformer Pad TF300T has 1 GB of DDR3 SDRAM, and an 802.11b/g/n Wi-Fi module.
At the front of the tablet there is a 1.2-megapixel camera for video conferencing. On the back is an 8-megapixel (5-element lens) BSI CMOS sensor with autofocus camera which can be used for capturing HD videos with 1080p resolution.
The TF300 was the first 'non-Nexus' device to receive Android 4.2 'Jelly Bean'; but that was also the last update ever received as Asus abandoned it shortly thereafter.
Docking keyboard
The optional docking keyboard features full QWERTY keys, touchpad as well as an additional battery that increases overall battery life from 8.5 hours to up to 15 hours.
There are multiple reports of the screen cracking due to the amount of stress the hinges puts on the screen when opening and closing the unit.
Weight: 640g without the keyboard. The weight of the docking keyboard is 546g.
3G and LTE models
A 3G model supporting HSPA+, HSDPA and quad-band GSM has been released as the TF300TG.
An LTE model supporting the above cellular standards as well as LTE has been released as the TF300TL.
Custom ROM Development
CyanogenMod 11 or later, an unofficial updated version of Android, can be installed onto the TF300T tablet. In addition, the TF300 continues to be supported by an active community of aftermarket operating system developers at XDA Developers several years after its initial release.
Reception
The Verge noted the good performance, build quality and battery life. They also noted that Android 4.0 has some issues, the keyboard is cramped and that the display isn't particularly special. Anandtech noted that it is a good successor to the earlier version.
See also
Comparison of tablet computers
Android version history
References
External links
Asus Transformer Pad at the ASUS website
Asus products
Tablet computers
Android (operating system) devices
Tablet computers introduced in 2012 |
5275940 | https://en.wikipedia.org/wiki/Login%20session | Login session | In computing, a login session is the period of activity between a user logging in and logging out of a (multi-user) system.
On Unix and Unix-like operating systems, a login session takes one of two main forms:
When a textual user interface is used, a login session is represented as a kernel session — a collection of process groups with the logout action managed by a session leader.
Where an X display manager is employed, a login session is considered to be the lifetime of a designated user process that the display manager invokes.
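A minimal sketch in Python, assuming a POSIX system, of how a process can inspect and create the kernel sessions described above; os.getsid() and os.setsid() wrap the underlying getsid(2) and setsid(2) system calls.

import os

print("process id:", os.getpid())
print("session id of this process:", os.getsid(0))   # 0 queries the calling process

pid = os.fork()
if pid == 0:
    # The forked child is not a process-group leader, so it may start a new
    # session and become its session leader, much as a display manager or
    # daemon does when it sets up a fresh login session.
    os.setsid()
    print("child's new session id:", os.getsid(0))
    os._exit(0)
else:
    os.waitpid(pid, 0)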
On Windows NT-based systems, login sessions are maintained by the kernel and control of them is within the purview of the Local Security Authority Subsystem Service (LSA). Winlogon responds to the secure attention key, requests the LSA to create login sessions on login, and terminates all of the processes belonging to a login session on logout.
See also
Windows NT Startup Process
Architecture of the Windows NT operating system line
Booting
Master boot record
Power-on self test
Windows Vista Startup Process
BootVis
Further reading
Operating system technology |
10931 | https://en.wikipedia.org/wiki/Finite-state%20machine | Finite-state machine | A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to some inputs; the change from one state to another is called a transition. An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition. Finite-state machines are of two types—deterministic finite-state machines and non-deterministic finite-state machines. A deterministic finite-state machine can be constructed equivalent to any non-deterministic one.
The behavior of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented. Simple examples are vending machines, which dispense products when the proper combination of coins is deposited, elevators, whose sequence of stops is determined by the floors requested by riders, traffic lights, which change sequence when cars are waiting, and combination locks, which require the input of a sequence of numbers in the proper order.
The finite-state machine has less computational power than some other models of computation such as the Turing machine. The computational power distinction means there are computational tasks that a Turing machine can do but an FSM cannot. This is because an FSM's memory is limited by the number of states it has. A finite-state machine has the same computational power as a Turing machine that is restricted such that its head may only perform "read" operations, and always has to move from left to right. FSMs are studied in the more general field of automata theory.
Example: coin-operated turnstile
An example of a simple mechanism that can be modeled by a state machine is a turnstile. A turnstile, used to control access to subways and amusement park rides, is a gate with three rotating arms at waist height, one across the entryway. Initially the arms are locked, blocking the entry, preventing patrons from passing through. Depositing a coin or token in a slot on the turnstile unlocks the arms, allowing a single customer to push through. After the customer passes through, the arms are locked again until another coin is inserted.
Considered as a state machine, the turnstile has two possible states: Locked and Unlocked. There are two possible inputs that affect its state: putting a coin in the slot (coin) and pushing the arm (push). In the locked state, pushing on the arm has no effect; no matter how many times the input push is given, it stays in the locked state. Putting a coin in – that is, giving the machine a coin input – shifts the state from Locked to Unlocked. In the unlocked state, putting additional coins in has no effect; that is, giving additional coin inputs does not change the state. However, a customer pushing through the arms, giving a push input, shifts the state back to Locked.
The turnstile state machine can be represented by a state-transition table, showing for each possible state, the transitions between them (based upon the inputs given to the machine) and the outputs resulting from each input:
{| class="wikitable"
! Current State
! Input
! Next State
! Output
|-
! rowspan="2"|Locked
| coin || Unlocked || Unlocks the turnstile so that the customer can push through.
|-
| push || Locked || None
|-
! rowspan="2"|Unlocked
| coin || Unlocked || None
|-
| push || Locked || When the customer has pushed through, locks the turnstile.
|}
The turnstile state machine can also be represented by a directed graph called a state diagram (above). Each state is represented by a node (circle). Edges (arrows) show the transitions from one state to another. Each arrow is labeled with the input that triggers that transition. An input that doesn't cause a change of state (such as a coin input in the Unlocked state) is represented by a circular arrow returning to the original state. The arrow into the Locked node from the black dot indicates it is the initial state.
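A minimal table-driven sketch of the turnstile in Python; the states, inputs and outputs follow the state-transition table above.

# The turnstile as a dictionary keyed by (current state, input),
# mapping to (next state, output) exactly as in the table above.
TURNSTILE = {
    ("Locked",   "coin"): ("Unlocked", "Unlocks the turnstile"),
    ("Locked",   "push"): ("Locked",   None),
    ("Unlocked", "coin"): ("Unlocked", None),
    ("Unlocked", "push"): ("Locked",   "Locks the turnstile"),
}

def run(inputs, state="Locked"):
    for symbol in inputs:
        state, output = TURNSTILE[(state, symbol)]
        print(f"{symbol}: now {state}" + (f" ({output})" if output else ""))
    return state

# A patron pushes the locked arm, inserts a coin, then pushes through.
assert run(["push", "coin", "push"]) == "Locked"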
Concepts and terminology
A state is a description of the status of a system that is waiting to execute a transition. A transition is a set of actions to be executed when a condition is fulfilled or when an event is received.
For example, when using an audio system to listen to the radio (the system is in the "radio" state), receiving a "next" stimulus results in moving to the next station. When the system is in the "CD" state, the "next" stimulus results in moving to the next track. Identical stimuli trigger different actions depending on the current state.
In some finite-state machine representations, it is also possible to associate actions with a state:
an entry action: performed when entering the state, and
an exit action: performed when exiting the state.
Representations
State/Event table
Several state-transition table types are used. The most common representation is shown below: the combination of current state (e.g. B) and input (e.g. Y) shows the next state (e.g. C). The complete action's information is not directly described in the table and can only be added using footnotes. An FSM definition including the full action's information is possible using state tables (see also virtual finite-state machine).
UML state machines
The Unified Modeling Language has a notation for describing state machines. UML state machines overcome the limitations of traditional finite-state machines while retaining their main benefits. UML state machines introduce the new concepts of hierarchically nested states and orthogonal regions, while extending the notion of actions. UML state machines have the characteristics of both Mealy machines and Moore machines. They support actions that depend on both the state of the system and the triggering event, as in Mealy machines, as well as entry and exit actions, which are associated with states rather than transitions, as in Moore machines.
SDL state machines
The Specification and Description Language is a standard from ITU that includes graphical symbols to describe actions in the transition:
send an event
receive an event
start a timer
cancel a timer
start another concurrent state machine
decision
SDL embeds basic data types called "Abstract Data Types", an action language, and an execution semantic in order to make the finite-state machine executable.
Other state diagrams
There are a large number of variants to represent an FSM such as the one in figure 3.
Usage
In addition to their use in modeling reactive systems presented here, finite-state machines are significant in many different areas, including electrical engineering, linguistics, computer science, philosophy, biology, mathematics, video game programming, and logic. Finite-state machines are a class of automata studied in automata theory and the theory of computation.
In computer science, finite-state machines are widely used in modeling of application behavior, design of hardware digital systems, software engineering, compilers, network protocols, and the study of computation and languages.
Classification
Finite-state machines can be subdivided into acceptors, classifiers, transducers and sequencers.
Acceptors
Acceptors (also called detectors or recognizers) produce binary output, indicating whether or not the received input is accepted. Each state of an acceptor is either accepting or non-accepting. Once all input has been received, if the current state is an accepting state, the input is accepted; otherwise it is rejected. As a rule, input is a sequence of symbols (characters); actions are not used. The start state can also be an accepting state, in which case the acceptor accepts the empty string. The example in figure 4 shows an acceptor that accepts the string "nice". In this acceptor, the only accepting state is state 7.
A (possibly infinite) set of symbol sequences, called a formal language, is a regular language if there is some acceptor that accepts exactly that set. For example, the set of binary strings with an even number of zeroes is a regular language (cf. Fig. 5), while the set of all strings whose length is a prime number is not.
An acceptor could also be described as defining a language that would contain every string accepted by the acceptor but none of the rejected ones; that language is accepted by the acceptor. By definition, the languages accepted by acceptors are the regular languages.
The problem of determining the language accepted by a given acceptor is an instance of the algebraic path problem—itself a generalization of the shortest path problem to graphs with edges weighted by the elements of an (arbitrary) semiring.
An example of an accepting state appears in Fig. 5: a deterministic finite automaton (DFA) that detects whether the binary input string contains an even number of 0s.
S1 (which is also the start state) indicates the state at which an even number of 0s has been input. S1 is therefore an accepting state. This acceptor will finish in an accept state, if the binary string contains an even number of 0s (including any binary string containing no 0s). Examples of strings accepted by this acceptor are ε (the empty string), 1, 11, 11..., 00, 010, 1010, 10110, etc.
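A minimal sketch in Python of the acceptor in Fig. 5: a DFA over the alphabet {0, 1} whose only accepting state, S1, is also the start state.

DELTA = {                      # state-transition function of the DFA in Fig. 5
    ("S1", "0"): "S2", ("S1", "1"): "S1",
    ("S2", "0"): "S1", ("S2", "1"): "S2",
}

def accepts(word, start="S1", accepting=("S1",)):
    state = start
    for symbol in word:
        state = DELTA[(state, symbol)]
    return state in accepting          # accept iff the run ends in an accepting state

# Strings with an even number of 0s are accepted, all others rejected.
assert accepts("") and accepts("1") and accepts("00") and accepts("1010") and accepts("10110")
assert not accepts("0") and not accepts("101")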
Classifiers
Classifiers are a generalization of acceptors that produce n-ary output where n is strictly greater than two.
Transducers
Transducers produce output based on a given input and/or a state using actions. They are used for control applications and in the field of computational linguistics.
In control applications, two types are distinguished:
Moore machine The FSM uses only entry actions, i.e., output depends only on state. The advantage of the Moore model is a simplification of the behaviour. Consider an elevator door. The state machine recognizes two commands: "command_open" and "command_close", which trigger state changes. The entry action (E:) in state "Opening" starts a motor opening the door, the entry action in state "Closing" starts a motor in the other direction closing the door. States "Opened" and "Closed" stop the motor when fully opened or closed. They signal to the outside world (e.g., to other state machines) the situation: "door is open" or "door is closed".
Mealy machine The FSM also uses input actions, i.e., output depends on input and state. The use of a Mealy FSM leads often to a reduction of the number of states. The example in figure 7 shows a Mealy FSM implementing the same behaviour as in the Moore example (the behaviour depends on the implemented FSM execution model and will work, e.g., for virtual FSM but not for event-driven FSM). There are two input actions (I:): "start motor to close the door if command_close arrives" and "start motor in the other direction to open the door if command_open arrives". The "opening" and "closing" intermediate states are not shown.
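A minimal sketch in Python of the elevator-door Moore machine described above, where entry actions are attached to states rather than transitions. The sensor_opened and sensor_closed events that move the door from "Opening" to "Opened" and from "Closing" to "Closed" are assumptions added for the sketch.

ENTRY_ACTIONS = {                       # Moore: output depends only on the state
    "Opening": "start motor, open direction",
    "Opened":  "stop motor; signal 'door is open'",
    "Closing": "start motor, close direction",
    "Closed":  "stop motor; signal 'door is closed'",
}
TRANSITIONS = {
    ("Closed",  "command_open"):  "Opening",
    ("Opening", "sensor_opened"): "Opened",    # assumed sensor event
    ("Opened",  "command_close"): "Closing",
    ("Closing", "sensor_closed"): "Closed",    # assumed sensor event
}

def step(state, event):
    new_state = TRANSITIONS.get((state, event), state)   # ignore undefined events
    if new_state != state:
        print(f"{state} --{event}--> {new_state}: {ENTRY_ACTIONS[new_state]}")
    return new_state

state = "Closed"
for event in ("command_open", "sensor_opened", "command_close", "sensor_closed"):
    state = step(state, event)

A Mealy version, as in figure 7, would instead attach the motor commands to the command_open and command_close transitions themselves.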
Sequencers
Sequencers (also called generators) are a subclass of acceptors and transducers that have a single-letter input alphabet. They produce only one sequence which can be seen as an output sequence of acceptor or transducer outputs.
Determinism
A further distinction is between deterministic (DFA) and non-deterministic (NFA, GNFA) automata. In a deterministic automaton, every state has exactly one transition for each possible input. In a non-deterministic automaton, an input can lead to one, more than one, or no transition for a given state. The powerset construction algorithm can transform any nondeterministic automaton into a (usually more complex) deterministic automaton with identical functionality.
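A minimal sketch in Python of the powerset construction mentioned above; NFA transitions map a (state, symbol) pair to a set of states, DFA states are frozensets of NFA states, and epsilon moves are omitted for brevity. The example NFA and its state names are illustrative assumptions.

from itertools import chain

def nfa_to_dfa(nfa_delta, alphabet, start, accepting):
    # DFA states are frozensets of NFA states; keep exploring newly created
    # subset states until no further subsets appear.
    start_set = frozenset([start])
    dfa_delta, worklist, seen = {}, [start_set], {start_set}
    while worklist:
        current = worklist.pop()
        for symbol in alphabet:
            target = frozenset(chain.from_iterable(
                nfa_delta.get((q, symbol), ()) for q in current))
            dfa_delta[(current, symbol)] = target
            if target not in seen:
                seen.add(target)
                worklist.append(target)
    dfa_accepting = {s for s in seen if s & accepting}
    return dfa_delta, start_set, dfa_accepting

# Example NFA (assumed for illustration): accepts binary strings ending in "01".
nfa = {("p", "0"): {"p", "q"}, ("p", "1"): {"p"}, ("q", "1"): {"r"}}
dfa_delta, dfa_start, dfa_accepting = nfa_to_dfa(nfa, "01", "p", {"r"})
assert frozenset({"p", "r"}) in dfa_accepting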
A finite-state machine with only one state is called a "combinatorial FSM". It only allows actions upon transition into a state. This concept is useful in cases where a number of finite-state machines are required to work together, and when it is convenient to consider a purely combinatorial part as a form of FSM to suit the design tools.
Alternative semantics
There are other sets of semantics available to represent state machines. For example, there are tools for modeling and designing logic for embedded controllers. They combine hierarchical state machines (which usually have more than one current state), flow graphs, and truth tables into one language, resulting in a different formalism and set of semantics. These charts, like Harel's original state machines, support hierarchically nested states, orthogonal regions, state actions, and transition actions.
Mathematical model
In accordance with the general classification, the following formal definitions are found.
A deterministic finite-state machine or deterministic finite-state acceptor is a quintuple (Σ, S, s₀, δ, F), where:
Σ is the input alphabet (a finite non-empty set of symbols);
S is a finite non-empty set of states;
s₀ is an initial state, an element of S;
δ is the state-transition function: δ: S × Σ → S (in a nondeterministic finite automaton it would be δ: S × Σ → P(S), i.e. δ would return a set of states);
F is the set of final states, a (possibly empty) subset of S.
For both deterministic and non-deterministic FSMs, it is conventional to allow δ to be a partial function, i.e. δ(s, x) does not have to be defined for every combination of s ∈ S and x ∈ Σ. If an FSM M is in a state s, the next symbol is x and δ(s, x) is not defined, then M can announce an error (i.e. reject the input). This is useful in definitions of general state machines, but less useful when transforming the machine. Some algorithms in their default form may require total functions.
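A minimal sketch in Python of an acceptor built directly from the quintuple (Σ, S, s₀, δ, F) above, with δ allowed to be partial; an undefined transition rejects the input. The small example loosely follows the "nice" acceptor of Fig. 4, though its exact state numbering is an assumption.

class DFA:
    def __init__(self, sigma, states, start, delta, accepting):
        self.sigma, self.states = set(sigma), set(states)
        self.start, self.delta = start, dict(delta)   # partial function: missing keys allowed
        self.accepting = set(accepting)

    def accepts(self, word):
        state = self.start
        for symbol in word:
            if (state, symbol) not in self.delta:
                return False                          # delta undefined: announce an error, reject
            state = self.delta[(state, symbol)]
        return state in self.accepting

nice = DFA(
    sigma="abcdefghijklmnopqrstuvwxyz",
    states=range(1, 8), start=1,
    delta={(1, "n"): 2, (2, "i"): 3, (3, "c"): 4, (4, "e"): 7},   # assumed numbering
    accepting={7},
)
assert nice.accepts("nice") and not nice.accepts("noice")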
A finite-state machine has the same computational power as a Turing machine that is restricted such that its head may only perform "read" operations, and always has to move from left to right. That is, each formal language accepted by a finite-state machine is accepted by such a kind of restricted Turing machine, and vice versa.
A finite-state transducer is a sextuple (Σ, Γ, S, s₀, δ, ω), where:
Σ is the input alphabet (a finite non-empty set of symbols);
Γ is the output alphabet (a finite non-empty set of symbols);
S is a finite non-empty set of states;
s₀ is the initial state, an element of S;
δ is the state-transition function: δ: S × Σ → S;
ω is the output function.
If the output function depends on the state and input symbol (ω: S × Σ → Γ) that definition corresponds to the Mealy model, and can be modelled as a Mealy machine. If the output function depends only on the state (ω: S → Γ) that definition corresponds to the Moore model, and can be modelled as a Moore machine. A finite-state machine with no output function at all is known as a semiautomaton or transition system.
If we disregard the first output symbol of a Moore machine, ω(s₀), then it can be readily converted to an output-equivalent Mealy machine by setting the output function of every Mealy transition (i.e. labeling every edge) with the output symbol of the destination Moore state. The converse transformation is less straightforward because a Mealy machine state may have different output labels on its incoming transitions (edges). Every such state needs to be split into multiple Moore machine states, one for every incident output symbol.
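A minimal sketch in Python of the Moore-to-Mealy conversion just described: each Mealy edge is labeled with the output of its destination Moore state, and the Moore machine's initial output ω(s₀) is simply dropped. The toy machine used as input is an illustrative assumption.

def moore_to_mealy(delta, moore_output):
    """delta: (state, symbol) -> next state; moore_output: state -> output symbol."""
    # Label every edge with the output of the state it enters.
    return {(s, x): (nxt, moore_output[nxt]) for (s, x), nxt in delta.items()}

# Toy Moore machine that reports whether an even or odd number of 1s has been read.
delta = {("E", "1"): "O", ("E", "0"): "E", ("O", "1"): "E", ("O", "0"): "O"}
out = {"E": "even", "O": "odd"}
mealy = moore_to_mealy(delta, out)
assert mealy[("E", "1")] == ("O", "odd")   # the edge now carries the output symbol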
Optimization
Optimizing an FSM means finding a machine with the minimum number of states that performs the same function. The fastest known algorithm doing this is the Hopcroft minimization algorithm. Other techniques include using an implication table, or the Moore reduction procedure. Additionally, acyclic FSAs can be minimized in linear time.
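A minimal sketch in Python of state minimization by partition refinement (the Moore reduction procedure mentioned above), assuming a total transition function and only reachable states; Hopcroft's algorithm computes the same result more efficiently.

def minimize(states, alphabet, delta, accepting):
    # Start from the accepting / non-accepting split, then repeatedly split
    # blocks whose members disagree on which block each input symbol leads to.
    partition = [b for b in (set(accepting), set(states) - set(accepting)) if b]
    changed = True
    while changed:
        changed, refined = False, []
        for block in partition:
            groups = {}
            for s in block:
                signature = tuple(next(i for i, b in enumerate(partition)
                                       if delta[(s, x)] in b) for x in alphabet)
                groups.setdefault(signature, set()).add(s)
            refined.extend(groups.values())
            changed |= len(groups) > 1
        partition = refined
    return partition          # each block is one state of the minimal machine

# The Fig. 5 DFA is already minimal: its two states end up in separate blocks.
delta = {("S1", "0"): "S2", ("S1", "1"): "S1", ("S2", "0"): "S1", ("S2", "1"): "S2"}
assert len(minimize({"S1", "S2"}, "01", delta, {"S1"})) == 2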
Implementation
Hardware applications
In a digital circuit, an FSM may be built using a programmable logic device, a programmable logic controller, logic gates and flip flops or relays. More specifically, a hardware implementation requires a register to store state variables, a block of combinational logic that determines the state transition, and a second block of combinational logic that determines the output of an FSM. One of the classic hardware implementations is the Richards controller.
In a Medvedev machine, the output is directly connected to the state flip-flops minimizing the time delay between flip-flops and output.
Through state encoding for low power state machines may be optimized to minimize power consumption.
Software applications
The following concepts are commonly used to build software applications with finite-state machines:
Automata-based programming
Event-driven finite-state machine
Virtual finite-state machine
State design pattern
Finite-state machines and compilers
Finite automata are often used in the frontend of programming language compilers. Such a frontend may comprise several finite-state machines that implement a lexical analyzer and a parser.
Starting from a sequence of characters, the lexical analyzer builds a sequence of language tokens (such as reserved words, literals, and identifiers) from which the parser builds a syntax tree. The lexical analyzer and the parser handle the regular and context-free parts of the programming language's grammar.
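A minimal sketch in Python of a hand-written finite-state lexer of the kind described above; it recognizes only identifier and number tokens and assumes tokens are separated by whitespace, whereas a real compiler frontend would generate its automaton from the language's token grammar.

def tokenize(text):
    tokens, state, lexeme = [], "START", ""
    for ch in text + " ":                    # trailing blank flushes the last token
        if state == "START":
            if ch.isalpha():
                state, lexeme = "IDENT", ch
            elif ch.isdigit():
                state, lexeme = "NUMBER", ch
        elif state == "IDENT":
            if ch.isalnum():
                lexeme += ch
            else:
                tokens.append(("IDENT", lexeme)); state = "START"
        elif state == "NUMBER":
            if ch.isdigit():
                lexeme += ch
            else:
                tokens.append(("NUMBER", lexeme)); state = "START"
    return tokens

assert tokenize("x1 42 foo") == [("IDENT", "x1"), ("NUMBER", "42"), ("IDENT", "foo")]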
See also
Abstract state machines (ASM)
Artificial intelligence (AI)
Abstract State Machine Language (AsmL)
Behavior model
Communicating finite-state machine
Control system
Control table
Decision tables
DEVS: Discrete Event System Specification
Extended finite-state machine (EFSM)
Finite-state machine with datapath
Gezel
Hidden Markov model
Homing sequence
Low-power FSM synthesis
Petri net
Pushdown automaton
Quantum finite automata (QFA)
Recognizable language
SCXML
Semiautomaton
Semigroup action
Sequential logic
Specification and Description Language
State diagram
State pattern
Synchronizing sequence
Transformation monoid
Transition monoid
Transition system
Tree automaton
Turing machine
UML state machine
Unique input/output sequence (UIO)
References
Further reading
General
Wagner, F., "Modeling Software with Finite State Machines: A Practical Approach", Auerbach Publications, 2006, .
ITU-T, Recommendation Z.100 Specification and Description Language (SDL)
Samek, M., Practical Statecharts in C/C++, CMP Books, 2002, .
Samek, M., Practical UML Statecharts in C/C++, 2nd Edition, Newnes, 2008, .
Gardner, T., Advanced State Management, 2007
Cassandras, C., Lafortune, S., "Introduction to Discrete Event Systems". Kluwer, 1999, .
Timothy Kam, Synthesis of Finite State Machines: Functional Optimization. Kluwer Academic Publishers, Boston 1997,
Tiziano Villa, Synthesis of Finite State Machines: Logic Optimization. Kluwer Academic Publishers, Boston 1997,
Carroll, J., Long, D., Theory of Finite Automata with an Introduction to Formal Languages. Prentice Hall, Englewood Cliffs, 1989.
Kohavi, Z., Switching and Finite Automata Theory. McGraw-Hill, 1978.
Gill, A., Introduction to the Theory of Finite-state Machines. McGraw-Hill, 1962.
Ginsburg, S., An Introduction to Mathematical Machine Theory. Addison-Wesley, 1962.
Finite-state machines (automata theory) in theoretical computer science
Abstract state machines in theoretical computer science
Machine learning using finite-state algorithms
Hardware engineering: state minimization and synthesis of sequential circuits
Finite Markov chain processes
"We may think of a Markov chain as a process that moves successively through a set of states s1, s2, …, sr. … if it is in state si it moves on to the next stop to state sj with probability pij. These probabilities can be exhibited in the form of a transition matrix" (Kemeny (1959), p. 384)
Finite Markov-chain processes are also known as subshifts of finite type.
Chapter 6 "Finite Markov Chains".
External links
Modeling a Simple AI behavior using a Finite State Machine Example of usage in Video Games
Free On-Line Dictionary of Computing description of Finite-State Machines
NIST Dictionary of Algorithms and Data Structures description of Finite-State Machines
A brief overview of state machine types, comparing theoretical aspects of Mealy, Moore, Harel & UML state machines. |
51792407 | https://en.wikipedia.org/wiki/The%20Jackbox%20Party%20Pack | The Jackbox Party Pack | The Jackbox Party Pack is a series of party video games developed by Jackbox Games for many different platforms on a near-annual release schedule since 2014. Each installation contains five games that are designed to be played in large groups, including in conjunction with streaming services like Twitch which provide means for audiences to participate.
History
Jellyvision had been well-established for its You Don't Know Jack series of "irreverent trivia" games. Though the series had been successful in the late 1990s, Jellyvision had not been able to make the transition easily from computer to home console games, and by 2001, all but six employees of Jellyvision had been laid off. The company focused on developing business solution software, specifically offering software to its clients to help assist their customers for complex forms or other types of support.
By 2008, Jellyvision, now named The Jellyvision Lab, saw that mobile gaming was booming, so it created a small subsidiary, Jellyvision Games, to rework You Don't Know Jack, first for consoles in its 2011 version, then for mobile and Facebook users with the now-defunct 2012 iteration. This last version was a critical success, and led the studio to focus on developing similar games, rebranding the studio by 2013 as Jackbox Games.
Jackbox Games' early one-off games, including Lie Swatter, Clone Booth, and Word Puttz, were generally designed as single-player games or played asynchronously with other players. A key game that followed these was its 2014 title Fibbage, which allows up to eight simultaneous players; the hosting player can stream the game live or play with people in the same room. Other players participate by using a web browser or mobile device to connect to the host's game through Jackbox's servers and provide their answers.
With the success of Fibbage, Jackbox Games decided that they could offer these games in packs, reworking older games to use the streaming capabilities and adding in new games. This formed the basis of the Jackbox Party Pack, with the first pack released in 2014 including updated versions of You Don't Know Jack, Fibbage, a reworked version of Lie Swatter for its multiplayer approach, and two new games. The company saw this as a new development model that allowed them to provide new packs on an annual basis, play around with different game formats, and provide higher value to consumers over one-off games.
Subsequent Jackbox Party Packs have included improvements of existing games, support for more players including the addition of audience participation through the same connectivity approach, better support for content management for streams (as to remove offensive terms in responses, for example), and the ability to create custom games. A key part of Party Pack games is to streamline the ability for players to get into games, and according to Jackbox Games' CEO Mike Bilder, they spent about a year working on building their servers and software to provide a flexible architecture for the player-side mobile and web interface to expand for any of the games, and to avoid having players download any type of app to get started.
According to Allard Laban, creative chief for both Jellyvision Labs and Jackbox Games, they select games to include in the packs through a combination of allowing the team to submit fleshed-out ideas, and through testing various ideas through pen-and-paper trials; Laban stated that for Party Pack 4, they had over fifty play-tested concepts which they narrowed down to four new games, rounding out the package with an improved version of Fibbage. Some games, such as Fakin' It, took multiple years to get the right gameplay and mechanics down to make it an appropriate game for inclusion.
The first six Jackbox Party Packs gained renewed attention during the COVID-19 pandemic as a way for many people to keep up social interactions while maintaining social distancing. Starting on May 1, 2020, Jackbox ran ten special Celebrity Jackbox live streams to support COVID-19 charities, with the celebrities playing various Jackbox Party Pack games alongside audience viewers. Jackbox said that its playerbase doubled from 100 million players in 2019 to 200 million by October 2020 due to society's shutdown. Jackbox Games improved server capacity and streaming service usability, and released an internationalized standalone version of Quiplash 2, Quiplash 2 InterLASHional, with support for French, German, Italian, and Spanish.
Jackbox released a Twitch extension for streamers in December 2020 which allows viewers of their channel to directly participate in Jackbox games from the Twitch interface.
Gameplay
Most games in The Jackbox Party Pack are designed for online play, requiring only one person to own and launch the game. Remaining players can be local and thus see the game via the first player's computer or console, or can be remote, watching the game be played through streaming media services. All players – whether local or remote – use either web-enabled devices, including personal computers and mobile or tablet devices, to enter a provided "room code" at Jackbox's dedicated servers to enter the game, or can use a Twitch extension controlled by the streamer to let viewers play directly via the Twitch viewer. Games are generally limited to 4-8 active players, but any other players connecting to the room after these players are connected become audience participants, who can impact how scoring is determined and influence the winner.
Each game generally has a period where all players are given a type of prompt. This prompt appears on the individual devices and gives players sufficient time to enter their reply or draw as necessary, and can be set to account for forced streaming delays that some streaming services require. The game then collects and processes all the replies, and frequently then gives players a chance to vote for the best answer or drawing; this is often where the audience may also participate by voting as a group. Games proceed for a number of rounds, and a winner, generally with the highest score at the end, is announced.
Seven of the eight packs are developed with a default ESRB Teen rating, with a family-friendly option to censor certain questions and player input; Party Pack 8, however, has an ESRB Everyone 10+ rating.
Games
Consoles/Computers
All Packs are available on PlayStation 4, Nintendo Switch, Xbox One, Microsoft Windows, MacOS, Linux, Apple TV, iPad, Android TV, Amazon Fire TV, Nvidia Shield TV, and Xfinity X1. Party Pack 1 and Party Pack 2 are available on PlayStation 3. Party Pack 8 is available on PlayStation 5 and Xbox Series X and Series S.
The Jackbox Party Pack (2014)
The Jackbox Party Pack was released on PlayStation 3, PlayStation 4, and Xbox One on November 19, 2014, and for Microsoft Windows on November 26, 2014. The Xbox 360 version was released on November 3, 2015, alongside retail editions for these console platforms published by Telltale Games. The Nintendo Switch version was released on August 17, 2017.
You Don't Know Jack 2015 is for 1-4 players and is based on the standard format for You Don't Know Jack games. Up to four players are tasked to answer multiple choice trivia questions presented obscurely in the game's "high culture meets pop culture" format. Players earn in-game money for answering correctly in a shorter amount of time and lose money for wrong answers. Failing to answer doesn't give a player any money. Multiplayer games also feature "screws", where one player can force another player to answer immediately and can earn a bonus if the "screwed" player answers incorrectly or fails to answer. The player with the most money at the end wins.
Drawful is for 3-8 players and is a drawing game. Each round starts with each player individually being given a playful phrase and a drawing canvas on their local device. They have a short amount of time to draw out that phrase. Following this, each picture is presented to all players, and the players except for the artist must enter a phrase they think the picture represents. Then, all those replies, along with the actual phrase for that picture, are presented to the players to make their vote of what they think the original phrase was. The artist of the picture gets points for every other player that guessed their original phrase, while those who wrote other phrases get points for votes their phrase gets. If a player selects a decoy answer, they don't get any points. The player with the most points at the end wins.
Word Spud is for 2-8 players and is a word association game. A word is presented and one player, at a time, comes up with a word that is associated with it. The remaining players vote if the association is good or not and the player who came up with the association gets points if it gets the most likes and loses points if it gets the most dislikes. From there, the next player starts from the new word to come up with a new association, and the game continues. The player with the most points at the end wins.
Lie Swatter is for 1-100 players and is a multiplayer version of the single-player mobile app that Jackbox Games released prior to The Jackbox Party Pack. The game challenges up to 100 players to correctly guess if presented trivia statements are true or not, "swatting" those that are false. Players earn points for correct answers with the fastest player earning additional points. The player with the most points at the end wins.
Fibbage XL is for 2-8 players and is an expansion of the standalone game that Jackbox Games released prior to the pack with new sets of questions. In the first two rounds of the game, each player selects from one of five random categories, and an obscure fact is presented to all players with a missing word or phrase to complete it. Each player uses their local device to enter a reply for those missing words; if they enter the actual right answer, they are asked to enter something different, and if they can't enter an answer before the timer runs out, they can press the "lie for me" button and get to choose between two game-generated choices. Then, the game presents all replies, including the correct one, to the players, who then select what they think is the right answer. Players score points for selecting the right answer, but can also score if other players select their reply, so players are encouraged to provide seemingly correct answers for their replies. Players lose points for selecting the false answer the game wrote itself. In the final round, "The Final Fibbage", one more question is provided for all of the players to answer. The player with the most points at the end wins.
The Jackbox Party Pack 2 (2015)
The Jackbox Party Pack 2 was released for Microsoft Windows, PlayStation 3, PlayStation 4, and Xbox One on October 13, 2015. The Nintendo Switch version was released on August 17, 2017.
Fibbage 2 is for 2-8 players. As compared to its predecessor, Fibbage 2 introduces new sets of questions and the ability for the audience to vote on answers which can provide an extra scoring boost to the players. A new option called the Defibrillator permits players to delete all of the answers except one and the truth of the selection for one question.
Earwax is for 3-8 players. In each round, one player is selected as the judge and is given a choice of five prompts. The prompt is presented to the other players, and these players are each given six random sound effects. Each player then selects two of the sound effects, in order, as a reply to the prompt. The judge player selects which combined sounds make the most humorous or fitting answer, and that selected player earns a point. The first player to earn three points wins.
Bidiots is for 3-6 players. It is a spiritual successor to Drawful. Players start by drawing images for randomly-assigned categories. Players then use in-game money to bid on these images as if at an art auction, trying to be the highest bidder for the images that match specific categories; the winning bidder gets the image and the artist of the image earns money. Players can use screws (similar to the You Don't Know Jack franchise) to force other players to bid, and if players run out of money, they can take out a predatory loan to try to compete through the rest of the game. After the allotted number of lots (8 for 3 players, 10 for 4 players, 12 for 5-6 players), the images each player bought earn them more money. The player with the most money at the end wins, unless one or more players take out three predatory loans, which make them lose money.
Quiplash XL is for 3-8 players. Jackbox Games released it as a standalone game prior to the pack, and it was included in this pack's release along with previous DLC (Quip Pack 1) and "over 100 brand new prompts". In the game's first two rounds, each player is given two prompts to provide an answer to; the prompts are given so that two players see each prompt. Players provide what they believe is a funny answer to each prompt. Then, all players and the audience are shown a prompt and the two answers provided. They vote for the answer they think is the best quip. Points are gained by the percentage of votes between the two players, and bonus points are awarded from a possible "quiplash" if they get all the votes. Entering the same thing as an opponent in a prompt doesn't award any points. In the final round, "The Last Lash," all players respond to the same prompt, and vote three times for the best answers of those presented. The player with the most points at the end wins.
Bomb Corp. is for 1-4 players. One player is an employee of a bomb factory that must deactivate inadvertently-started bombs as they come off the assembly lines, while other players are employees that are given different sets of instructions to help deactivate it. The instructions are specifically obtuse and potentially conflicting, requiring careful communication between players.
The Jackbox Party Pack 3 (2016)
The Jackbox Party Pack 3 was released during the week of October 18, 2016 for Microsoft Windows, macOS, PlayStation 4, Xbox One, certain Android devices, and Apple TV. It was subsequently released on the Nintendo Switch on April 13, 2017. A version for Xfinity's X1 set-top box was available in January 2018.
Quiplash 2 is for 3-8 players. As compared to its predecessor, Quiplash 2 introduces new prompts, the ability for the hosting player to create new prompts, the ability for the host to censor players, the "safety quip" feature that incorporates the ability for the player to have a quip written for them, and new "Last Lash" rounds that either requires players to come up with a meaning of a given acronym, complete a caption in a comic strip, or come up with something clever using a given word in a prompt; unlike the previous game's final round, medals determine the points distributed to the players.
Trivia Murder Party is for 1-8 players and has a lighthearted theme of a horror thriller (similar to the Saw franchise). Each round includes a multiple-choice trivia question, with players earning in-game money for being correct, and then a subsequent "Killing Floor" mini-game if any "living" player got the question wrong. The mini-game may cost the lives of one or more remaining players, who then otherwise continue in the game as ghosts. The endgame starts when only one player remains alive, where all players now try to escape along a darkening hallway: each question provides three possible answers to a category, and each player determines which answers belong to it; the leading player only sees two answers, giving trailing players the opportunities to take the lead and escape first. After nine questions are survived, a loser wheel minigame is drawn if there is more than one player alive before the endgame. If only one player remains alive after all other players died in 5 or fewer questions, the player has to survive two more to make it to the endgame. But if the last player dies after not successfully surviving in a minigame, it is "game over" for everyone.
Guesspionage is for 2-8 players and is a percentage-guessing game. In the first two rounds, each player in turn, guesses what percentage of people have a certain quality or do a certain activity, such as texting while driving. If there are more than 5 audience members, they are surveyed prior to the turns to get these percentages, otherwise earlier survey results by Jackbox Games are used. Once the current player makes their guess, the other active players can consider if they are higher or lower than the guessed value, including opining if they are off by more than a certain amount. Points are scored by the current player based on how close they are and by the other players based on the guessing of higher or lower. Points are also scored by the current player that guessed the exact percentage. In the final round, one question with 9 choices is given, and the players all have to pick what they think are the three most popular answers, with points awarded based on the answer's popularity. The player with the most points at the end wins.
Fakin' It is for 3-6 players and is a local multiplayer game where each player has their own connected device. In each round, one player is randomly selected to be the Faker, and all players except the Faker are given instructions that involve some type of physical action, such as raising a hand or making a face; the Faker is not given this information and instead must figure out from the other players what to do. Each player then attempts to guess who the Faker was by their actions, with the round ending when all other players correctly guess the Faker or the Faker successfully escapes; points are awarded when at least one player guesses the Faker correctly, when everyone guesses correctly, and/or when the Faker escapes capture, for each task out of the number allotted (3 for 4-6 players, 2 for 3 players). After the first round, players may select any action they like. The final round is always "Text You Up", where each player answers a number of open-ended questions, while the Faker is given different questions which can have overlapping answers with the questions given to the players. (For example, the other players may be asked about a positive trait about themselves, while the Faker would be asked what traits they would look for in a companion.) The player with the most points at the end wins.
Tee K.O. is for 3-8 players and is a drawing-based game. Each player starts by drawing three images of anything they want, though the game provides suggestions to help. Then each player has a chance to enter several short sayings or slogans. Each player is then given two or more random drawings and two or more random sayings, and selects the pair that best fits together as printed on a T-shirt. These designs are then put into a one-on-one voting battle with all players and audience members to determine the best-voted T-shirt design and the design that had the longest voting streak. If the T-shirt gets all votes, the player who drew the shirt gets a "shirtality". A second round of drawing, slogan writing, pairing, and voting is performed. The winning designs from each round are then put against each other to determine the ultimate winning design. After the game, players are able to order custom printed T-shirts.
The Jackbox Party Pack 4 (2017)
The Jackbox Party Pack 4 was released during the week of October 17, 2017, for Microsoft Windows, macOS, PlayStation 4, Xbox One, Nintendo Switch, various Android devices, and Apple TV. A version for Xfinity's X1 set-top box was available in January 2018.
Fibbage 3 is for 2-8 players and is the third game in the Fibbage series. The game includes new interactivity with the audience by letting them add their own lies to the selection and new "Final Fibbage" facts with two missing words or phrases instead of one. It has a new separate game mode called Fibbage: Enough About You that is for 3-8 players and replaces the game's traditional questions with questions relating to the players. In the first round of this mode, players start writing a truth about themselves and then the other players have to write a lie and then find the truth about one of the players. The player who wrote the truth about themselves earns points for every other player that guessed it correctly. In the final round of this mode, all players have to write a truth and a lie about themselves and then the other players have to find which statement is true about the player.
Survive the Internet is for 3-8 players and is a game of user-generated content that takes place on a fictional version of the Internet. In the first three rounds, one player receives a question that asks their opinion on a topic. Their answer is taken out of context and sent to another player, who is then told to decide what the reply was in response to, as if it were posted on a specific site such as social media, forums, jobs or news, attempting to twist the reply as best they can to make the first player look bad. All players and the audience are then presented with the pairs of original replies and the topics they were paired with, and vote on which pairing is the most ridiculous. Each vote gains more points for the second player, who twisted the reply, and fewer points for the first player, who provided the reply. If a pairing gets the most votes, the second player who twisted the reply gets a "best burn" and a larger bonus, while the first player who provided the reply gets an "ultimate sacrifice" and a smaller bonus. The final round is always the "Photosharing site", where players are given a question with two choices and the photo based on their selected choice is sent to another player who has to comment about it. The player with the most points at the end wins, having "survived the Internet".
Monster Seeking Monster is for 3-7 players and has a horror theme where each player is a disguised monster attempting to date other players. In each of the six rounds, players start by sending up to four messages to other players; the non-playable robot monster, which appears in 3-4 player games if the audience is turned off, replies right after a player texts it, while the audience, if turned on and participating, uses mad lib-style prompts to select phrases to send. Following this, each player selects one other player they would date based on those replies. If two players selected each other, they both earn a heart. Additional scoring bonuses and effects due to each player's hidden monster power are also accounted for. A rejection happens when two players select other players instead of each other, or if one or more players don't select anyone (players also lose a heart if they don't select anyone to date). From the end of the second round onward, the monster form of the highest-scoring player whose form is still unknown is revealed to all. The player with the most hearts at the end wins, unless other special conditions relating to the player's monster are met.
Bracketeering is for 3-16 players and is a tournament-style game for up to sixteen players, played across three rounds. In the first round, players are presented with a prompt to complete with the best or funniest answers they can. (The number of answers allotted are 2 for 3-4 players and 1 for 5-16 players.) These answers are randomly placed on a tournament-style grid (8 for 3-8 players and 16 for 9-16 players). The players are then given one of the tournament matchups and predict which answer will win that matchup. Subsequently, each match is then presented to all players and the audience. The answer that gets the highest percentage of votes wins, with the percentage that it wins by tied to how much in-game money those players that guessed that match correctly get. If two, three or four answers have the same number of votes, all players have to tap on their device as fast as they can to cheer for their answer. Subsequent matchups use these best answers going forward. After the final matchup, the player that provided the winning reply gets an additional cash bonus. The second round is a "blind bracket" where the players are presented with a prompt, but the brackets are based on a different, related prompt using those answers. The final round is a "triple blind bracket" where the prompt at each level of the bracket changes. The player with the most money at the end wins.
Civic Doodle is for 3-8 players and is an art game similar to Drawful and Bidiots with two players drawing the same piece of art simultaneously. In the first two rounds, a start of a doodle is presented to two randomly selected players, and they have a short time to draw atop that; this is done in real-time allowing the other players and the audience to provide feedback on either drawing in the form of preselected emoji. After the timer is done, the players and audience vote for which drawing is better, with points awarded to both players based on how many votes they received, as well as an additional point bonus based on the emoji votes. Subsequently, two more players then draw atop the highest-voted picture. After a number of matchups, depending on how many players are in the game, not counting the audience members, players have to suggest a title for the highest-voted picture. Following this, the players and the audience vote for their favorite title. The final round has all players given a title and a start of a doodle and they have to draw the features the game requested. The player with the most points at the end wins. After the game, players can do a free play draw or order custom merch.
The Jackbox Party Pack 5 (2018)
The Jackbox Party Pack 5 was released on October 17, 2018.
You Don't Know Jack: Full Stream is for 1-8 players and is the newest iteration of the You Don't Know Jack franchise. The game is updated with the same streaming-friendly features as most other Party Pack games. This includes support for up to eight players and an audience. As the game now uses both mobile devices and computers as controllers, text-based questions like the "Gibberish Question" return, and both new and classic question types are present.
Split the Room is for 3-8 players and is a scenario game. In the first two rounds, players are presented with a hypothetical scenario with a fill-in-the-blank component. Players then try to fill in the blank such that when the question is presented to the other players, the yes or no responses will "split the room", with more points awarded for an even division of answers. The final round, which is always the "Decisive Dimension", gives prompts with two options where the first is already completed. Players complete the second option and everyone else picks between the two. The player with the most points at the end wins.
Mad Verse City is for 3-8 players and has players use giant robots to out-rap their opponents. In each round, players are given who they are trying to out-rap, and use their device to fill in various prompts given to them. When one player is done making their rap, they may select any activity on their device. The game then runs through each rap using a text-to-speech voice, and once the two players have out-rapped each other, the other players have to choose the rap that they feel is the best. In-game money is gained by the percentage of votes between the two players and a possible "cheer" cash bonus is awarded if they get all votes. The player with the most money at the end wins.
Zeeple Dome is for 1-6 players. Players are contestants in an alien combat arena, the Zeeple Dome, to take down aliens. The game is physics-based and has players slingshot their characters across the game's levels, working together to defeat enemies and gain power-ups for their team. When each enemy is defeated, in-game money is gained in the level.
Patently Stupid is for 3-8 players and is a game of problem-solving, inventing and funding. In the first round, players individually write out problems that need to be solved. These are randomly distributed among players, who are then given the opportunity to draw and name an invention to solve that problem. Players are then able to present their invention to the other players (either using their own voice or allowing the game to present). The other players then use in-game money to fund the invention. Inventions that surpass a funding minimum get a bonus to their inventor. In the final round, one player has to choose a problem for all players to solve. The player with the most money at the end wins.
The Jackbox Party Pack 6 (2019)
The Jackbox Party Pack 6 was announced in March 2019 during PAX East and was released on October 17, 2019.
Trivia Murder Party 2 is for 1-8 players. It is the sequel to Trivia Murder Party and follows a similar format, taking place in a hotel. In addition to new questions, it includes new "Killing Floor" mini-games (including Quiplash), special items which can help or hinder a player's ability to survive, and a barrier at the endgame's exit, where players have to answer a question correctly before they can escape (the leading player now sees the third answer at the barrier). Also, the audience plays as its own player, whereas it was a separate entity in the first Trivia Murder Party.
Role Models is for 3-6 players. Players first vote for one of the five categories and then try to match the other players (including themselves) to one of the items from that category. Points are gained between the players if any of their matches is the majority favorite of the group, and extra points can be won if the player marked their answer as "99% sure" and was correct. In case two or more subjects are tied or one player has not been assigned a role by the others, the players will do an in-game experiment. The player with the most points at the end wins.
Joke Boat is for 3-8 players and has players make jokes based on a selected list of words brainstormed by players at the start of the game. During the first two joke rounds, players are given the start of a joke prompt with a missing word they select from a random selection of the brainstormed words. They then finish the joke. Players are then able to perform their joke (either using their own voice or allowing the game to perform), two at a time. The other players and the audience then vote for their favorite of the two. Points are gained by the number of votes between the two players, and a possible "crushed it" point bonus is awarded if they get all votes. The final round has players take an existing joke setup and try to write a better joke than the original one. The player with the most points at the end wins.
Dictionarium is for 3-8 players and involves players creating a fake dictionary. The game can either be played where the players are given a fake word or a fake slang saying as a prompt. The game is played across three rounds. In the first round, players create a definition and vote for their favorite. Each vote gains points for the player that wrote the definition. In the second round, players create a new word or phrase as a synonym and then vote for their favorite. Each vote gains points for the player that wrote the synonym. In the final round, players create a sentence using the word or phrase and then vote for their favorite. Each vote gains points for the player that wrote the sentence. The player with the most points at the end wins.
Push The Button is for 4-10 players and takes place on a spaceship, where one or more players have been assigned as an alien and the other players, as humans, must eject the aliens before a timer runs out. Each round, one player determines an activity on the ship (such as drawing or writing a response to a question) and selects a number of the other crew to participate. The assigned human players get one prompt, but the alien players get a different one that would likely cause some confusion. The results are shown, and players have the time to determine if any response seems suspicious. In later rounds, alien players have "hacks" they can use to either get the correct human prompt or send the alien prompt instead to a human player. At any time before the timer runs down, one player can "push the button" and select the other player(s) they believe are an alien. All other players then vote if they agree or not. In order for the players to be ejected, a unanimous vote must be passed. If the vote fails, the game continues. If the vote succeeds, the game reveals if the players were correct or incorrect. The alien players win if the human players vote out a human or none of the players push the button before the time is up.
The Jackbox Party Pack 7 (2020)
The Jackbox Party Pack 7 was released on October 15, 2020.
Quiplash 3 is for 3-8 players. It is the third game in the Quiplash series and has the game's signature final round, "The Last Lash", replaced with the "Thriplash", where instead of all players answering the same prompt, each pair of players only receives one prompt instead of the usual two, but must answer with three separate responses. (The host will play with a random player if there is an odd number of players in the game.) The game's two-dimensional style art has also been replaced by clay animation.
The Devils and the Details is for 3-8 players. Players become a family of devils, trying to work together to complete a list of mundane chores in certain scenarios (e.g. while a relative is visiting), with each successful task scoring points towards a net score during one of the three days of an episode. Many chores require verbal communication from one player to another to complete, which can create confusion. As the players are devils, they are also competing against each other. They can complete "selfish" chores, which provide extra points to the player who completed them but also build the selfishness meter, so the other players have to stop that player from doing the selfish chores. When the selfishness meter is full, it creates a family emergency (e.g. a flooded basement, a burning kitchen or a power outage), lowering the total score bar and making it harder to successfully finish a single day. If the family score bar reaches the target score by the end of a day, the game proceeds to the next day. If the score bar falls short of the target on the first or second day, the VIP player must either retry the day or quit the game. The third day, however, is a challenge: if the day ends with the family score bar short of the target score, the episode is over with just the final scores and some tasks completed (there is no retrying on this day), but if the day ends with all of the tasks completed and the family score bar reaching the target score, the winner gets a prize after the game.
Champ'd Up is for 3-8 players. Players start by creating their own champions and challengers via a drawing interface with unusual monikers and skills, similar to Tee K.O.'s T-shirts. The players' creations are then pitted against each other, with players and the audience voting for the best one in each round based on how fitting a character is for the given category. In-game money is gained according to the percentage of votes between the players that made the champions/challengers, and a "Champ'd Up" cash bonus is awarded if a creation gets all votes. The player with the most money at the end wins. There is also a card game called Champ'd Up: Slam Down, which is available to purchase after the game.
Talking Points is for 3-8 players. Each person starts by creating three speech titles and then choosing one of the three on their devices. Then, one person, as a presenter, is shown a series of text and picture slides which they are seeing for the first time, and has to use their own voice to talk through these to impress the audience, which votes with its reactions. The other people in the game act as assistants to the presenter to select the next slide that the presenter will see from a random selection, which could either help or throw off the presenter. Points are awarded to the presenter based on how often the audience reacted and on the resulting reaction graph, as well as to the assistants. The players can also write a comment about the presentation. Then all players name the award they will give out. The player with the most points at the end wins. The game also has a free play game mode.
Blather 'Round is for 2-6 players. The game's style is very similar to Charades, where players have to pick a place, story, thing, or person to describe using sentences. While one player gives hints to what they have chosen with fixed sentences, the other players must try to guess what the presenter is describing. Points are awarded to the presenter and to whoever correctly guesses the chosen word, as well as to those who contributed a helpful hint. The player with the most points at the end wins.
The Jackbox Party Pack 8 (2021)
The Jackbox Party Pack 8 was released on October 14, 2021, for PlayStation 4, PlayStation 5, Xbox One, Xbox Series X and Series S, Nintendo Switch, Microsoft Windows, MacOS, Linux, Apple TV, iPad, Android TV, Amazon Fire TV, Nvidia Shield TV, and Xfinity X1.
Drawful: Animate is for 3-10 players and is the third game in the Drawful series. The highlight feature is that players create a two-frame animation rather than a single static drawing. Other added features include having three colors to use for drawing and the ability to double down on a guess once per round which awards double points if correct, but if incorrect, awards the player who wrote the fake answer double points instead. A 'friend mode' is also included which has the prompts be about the players.
The Wheel of Enormous Proportions is for 2-8 players and is a trivia game hosted by an all-knowing wheel. At the start of the game, players write a soul-seeking question for the wheel to answer. Players then must earn slices by competing in a trivia round of three questions, each starting with two slices. High scoring players win a slice, with the top scorer winning an extra one. Question types include: selecting answers from a set of twelve, matching answers with their counterpart, guessing what the wheel is thinking based on given clues, writing answers or giving a numerical estimate. On the third question, the top-performing player wins a power slice that can spread across the wheel and when landed on allows them to play a minigame that can impact the scores such as swapping points between two players. During the spinning round, players place their slices onto the wheel and take turns spinning it, earning points when the wheel lands on a slot they picked. The number of points gained increases as the wheel is spun more and points are divided between players who picked the same slot. The spinning round ends when a Spin Meter fills up, and a new trivia round begins. Once a player reaches 20,000 points, any points that player gains from the wheel allows them to spin the Winner Wheel that determines them as the game's winner. The wheel then answers the question the winner wrote at the start of the game.
Job Job is for 3-10 players and is a job interview question game. At the beginning of each round, players answer a number of icebreaker questions in any way they want. Afterwards, all of the responses are shuffled between players, and the goal is to answer job interview questions using only words from the icebreaker responses and the question itself. The interview question and the two provided responses are then pitted against each other, and players and the audience vote on their favorite answer. Points are gained via the percentage of votes between the two players, and bonus points are awarded to players whose words were used in a winning answer or whose winning response contains words from three different players. In the final round, instead of interview questions, players create short responses about themselves by answering the same two statements. The player with the most points at the end wins.
The Poll Mine is for 2-10 players. Players are split into two teams of adventurers trapped in a cave by an evil witch. To escape, all players answer an opinion-based poll of eight options in order of preference. Afterwards, each team takes turns opening one from a set of doors, each marked by an answer from the poll. The first round has the teams find the top three most popular answers, while the second has them find the 2nd, 3rd and 4th most popular answers. Each correct answer earns a team a torch, but an incorrect answer causes them to lose a torch. During the final round, the players must open doors from least to most popular. During this round, no torches are gained, but existing torches are still lost from picking an incorrect door. When a team loses their last torch, the other team must pick the correct door to eliminate them and win the game. When everyone correctly opens all doors, the team with the most torches remaining wins. However, if both teams lose all of their torches, it is "game over" for everyone. The game also has a streamer mode where one team consists of the players and the other consists of the audience, who pick a door by majority vote.
Weapons Drawn is for 4-8 players and is a social deduction game. Every player acts as both a detective and a murderer attending a party. Players draw two murder weapons, each containing a hidden letter from their name, and name an accomplice to bring as their guest. Players then attempt to murder accomplices by working out which player they think invited them as a guest. If successful, one of the culprit's murder weapons is left at the scene. For reference, the game reveals one weapon drawn by each player. Players vote between two cases to solve, then attempt to work together to analyse the murder weapon and vote for who they think committed the crime. During the final round, players guess every remaining unsolved murder in rapid succession, with each murderer gaining points for every detective they fool. Points are given for inviting accomplices that receive a high number of murder attempts, for evading being deduced as the culprit, and for correctly finding the culprits of other murders. The player with the most points at the end wins.
The Jackbox Party Pack 9 (2022)
The Jackbox Party Pack 9 was announced for release in late 2022.
The Jackbox Starter Pack (2022)
A starter pack containing three previously released games, updated to include additional language translations, is expected to be released in mid-2022.
Reception
Jackbox Games said that sales jumped by up to 1,000% from March to May 2020, the first three months of the COVID-19 pandemic shutdown. Though sales have since leveled off, the company said that its player base still grew, doubling from 100 million players in 2019 to 200 million by October 2020 due to the ongoing pandemic.
PC Gamer said "the Jackbox games are the perfect way to beat the social distancing blues". Wired considered the Party Packs, along with Fall Guys and Among Us, as popular narrative-less games during the pandemic, as they helped to avoid the "cultural trauma" the pandemic had brought.
See also
Use Your Words, a video game similar to games in The Jackbox Party Pack
References
External links
2014 video games
Android (operating system) games
IOS games
Linux games
MacOS games
Nintendo Switch games
Party video games
PlayStation 3 games
PlayStation 4 games
Video game franchises
Video game franchises introduced in 2014
Video games developed in the United States
Windows games
Xbox One games |
9914 | https://en.wikipedia.org/wiki/Executable%20and%20Linkable%20Format | Executable and Linkable Format | In computing, the Executable and Linkable Format (ELF, formerly named Extensible Linking Format), is a common standard file format for executable files, object code, shared libraries, and core dumps. First published in the specification for the application binary interface (ABI) of the Unix operating system version named System V Release 4 (SVR4), and later in the Tool Interface Standard, it was quickly accepted among different vendors of Unix systems. In 1999, it was chosen as the standard binary file format for Unix and Unix-like systems on x86 processors by the 86open project.
By design, the ELF format is flexible, extensible, and cross-platform. For instance it supports different endiannesses and address sizes so it does not exclude any particular central processing unit (CPU) or instruction set architecture. This has allowed it to be adopted by many different operating systems on many different hardware platforms.
File layout
Each ELF file is made up of one ELF header, followed by file data. The data can include:
Program header table, describing zero or more memory segments
Section header table, describing zero or more sections
Data referred to by entries in the program header table or section header table
The segments contain information that is needed for run time execution of the file, while sections contain important data for linking and relocation. Any byte in the entire file can be owned by one section at most, and orphan bytes can occur which are unowned by any section.
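For illustration only (this sketch is not part of the ELF specification), a small C program can identify an ELF file by reading the identification bytes that begin the ELF header and reporting the address size and endianness they declare; the file name here is a hypothetical command-line argument.

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    unsigned char ident[16];   /* e_ident: magic number, class, data encoding, ... */
    FILE *f;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    f = fopen(argv[1], "rb");
    if (!f || fread(ident, 1, sizeof ident, f) != sizeof ident) {
        perror(argv[1]);
        return 1;
    }
    /* Every ELF file begins with the 4-byte magic 0x7F 'E' 'L' 'F'. */
    if (memcmp(ident, "\x7f" "ELF", 4) != 0) {
        puts("not an ELF file");
    } else {
        /* ident[4] is EI_CLASS: 1 = 32-bit, 2 = 64-bit;
         * ident[5] is EI_DATA:  1 = little-endian, 2 = big-endian. */
        printf("ELF, %s-bit, %s-endian\n",
               ident[4] == 2 ? "64" : "32",
               ident[5] == 2 ? "big" : "little");
    }
    fclose(f);
    return 0;
}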
File header
The ELF header defines whether to use 32- or 64-bit addresses. The header contains three fields that are affected by this setting and offset other fields that follow them. The ELF header is 52 or 64 bytes long for 32-bit and 64-bit binaries respectively.
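The 64-bit header can be sketched as the following C structure, mirroring the declarations commonly provided in <elf.h>; the exact field names and widths are taken as an assumption from the generic System V ABI rather than quoted from it. The 32-bit variant uses 32-bit values for e_entry, e_phoff and e_shoff, which accounts for the 52-byte size.

#include <stdint.h>

typedef struct {
    unsigned char e_ident[16]; /* magic number, class (32/64-bit), data encoding, ABI */
    uint16_t      e_type;      /* object file type: relocatable, executable, shared, core */
    uint16_t      e_machine;   /* target instruction set architecture */
    uint32_t      e_version;   /* ELF version, normally 1 */
    uint64_t      e_entry;     /* virtual address of the entry point */
    uint64_t      e_phoff;     /* file offset of the program header table */
    uint64_t      e_shoff;     /* file offset of the section header table */
    uint32_t      e_flags;     /* processor-specific flags */
    uint16_t      e_ehsize;    /* size of this header: 64 bytes here, 52 in 32-bit ELF */
    uint16_t      e_phentsize; /* size of one program header table entry */
    uint16_t      e_phnum;     /* number of program header table entries */
    uint16_t      e_shentsize; /* size of one section header table entry */
    uint16_t      e_shnum;     /* number of section header table entries */
    uint16_t      e_shstrndx;  /* index of the section that holds section names */
} Elf64_Ehdr;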
Program header
The program header table tells the system how to create a process image. It is found at file offset e_phoff, and consists of e_phnum entries, each with size e_phentsize. The layout is slightly different in 32-bit ELF vs 64-bit ELF, because the p_flags field is in a different structure location for alignment reasons. Each entry is structured as follows.
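The two layouts can be sketched as C declarations mirroring those commonly provided in <elf.h> (treated here as an assumption rather than a quotation of the standard), which makes the relocation of p_flags visible.

#include <stdint.h>

/* 32-bit ELF: p_flags sits between p_memsz and p_align. */
typedef struct {
    uint32_t p_type;    /* segment type, e.g. loadable, dynamic, interpreter */
    uint32_t p_offset;  /* offset of the segment in the file */
    uint32_t p_vaddr;   /* virtual address of the segment in memory */
    uint32_t p_paddr;   /* physical address, where relevant */
    uint32_t p_filesz;  /* size of the segment in the file */
    uint32_t p_memsz;   /* size of the segment in memory */
    uint32_t p_flags;   /* read/write/execute permission flags */
    uint32_t p_align;   /* required alignment */
} Elf32_Phdr;

/* 64-bit ELF: p_flags is moved up next to p_type so that the
 * following 8-byte fields stay naturally aligned. */
typedef struct {
    uint32_t p_type;
    uint32_t p_flags;
    uint64_t p_offset;
    uint64_t p_vaddr;
    uint64_t p_paddr;
    uint64_t p_filesz;
    uint64_t p_memsz;
    uint64_t p_align;
} Elf64_Phdr;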
Section header
Tools
readelf is a Unix binary utility that displays information about one or more ELF files. A free software implementation is provided by GNU Binutils.
elfutils provides alternative tools to GNU Binutils purely for Linux.
elfdump is a command for viewing ELF information in an ELF file, available under Solaris and FreeBSD.
objdump provides a wide range of information about ELF files and other object formats. objdump uses the Binary File Descriptor library as a back-end to structure the ELF data.
The Unix file utility can display some information about ELF files, including the instruction set architecture for which the code in a relocatable, executable, or shared object file is intended, or on which an ELF core dump was produced.
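A rough impression of what such tools report can be given by a small C program that walks the program header table using the e_phoff, e_phnum and e_phentsize fields. This is a simplified sketch, not a substitute for readelf or objdump: it assumes a well-formed 64-bit ELF file whose byte order matches the host, and the <elf.h> header found on Linux and the BSDs.

#include <elf.h>    /* Elf64_Ehdr and Elf64_Phdr definitions */
#include <stdio.h>

int main(int argc, char **argv)
{
    Elf64_Ehdr eh;
    Elf64_Phdr ph;
    FILE *f;
    int i;

    if (argc < 2 || (f = fopen(argv[1], "rb")) == NULL)
        return 1;
    if (fread(&eh, sizeof eh, 1, f) != 1) {
        fclose(f);
        return 1;
    }
    printf("entry point 0x%llx, %u program header entries at offset %llu\n",
           (unsigned long long)eh.e_entry, (unsigned)eh.e_phnum,
           (unsigned long long)eh.e_phoff);
    for (i = 0; i < eh.e_phnum; i++) {
        /* Each entry is e_phentsize bytes; seek to the i-th one and read it. */
        if (fseek(f, (long)(eh.e_phoff + (unsigned long long)i * eh.e_phentsize), SEEK_SET) != 0
            || fread(&ph, sizeof ph, 1, f) != 1)
            break;
        printf("segment %d: type 0x%x, file size %llu, memory size %llu\n",
               i, (unsigned)ph.p_type,
               (unsigned long long)ph.p_filesz, (unsigned long long)ph.p_memsz);
    }
    fclose(f);
    return 0;
}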
Applications
Unix-like systems
The ELF format has replaced older executable formats in various environments.
It has replaced a.out and COFF formats in Unix-like operating systems:
Linux
Solaris / Illumos
IRIX
FreeBSD
NetBSD
OpenBSD
Redox
DragonFly BSD
Syllable
HP-UX (except for 32-bit PA-RISC programs which continue to use SOM)
QNX Neutrino
MINIX
Non-Unix adoption
ELF has also seen some adoption in non-Unix operating systems, such as:
OpenVMS, in its Itanium and amd64 versions
BeOS Revision 4 and later for x86 based computers (where it replaced the Portable Executable format; the PowerPC version stayed with Preferred Executable Format)
Haiku, an open source reimplementation of BeOS
RISC OS
Stratus VOS, in PA-RISC and x86 versions
Windows 10 Anniversary Update using the Windows Subsystem for Linux.
Windows 11
SkyOS
Fuchsia OS
Z/TPF
HPE NonStop OS
Deos
Game consoles
Some game consoles also use ELF:
PlayStation Portable, PlayStation Vita, PlayStation (console), PlayStation 2, PlayStation 3, PlayStation 4, PlayStation 5
GP2X
Dreamcast
GameCube
Nintendo 64
Wii
Wii U
PowerPC
Other (operating) systems running on PowerPC that use ELF:
AmigaOS 4, in which the ELF executable has replaced the prior Extended Hunk Format (EHF), which was used on Amigas equipped with PPC processor expansion cards.
MorphOS
AROS
Mobile phones
Some operating systems for mobile phones and mobile devices use ELF:
Symbian OS v9 uses E32Image format that is based on the ELF file format;
Sony Ericsson, for example, the W800i, W610, W300, etc.
Siemens, the SGOLD and SGOLD2 platforms: from Siemens C65 to S75 and BenQ-Siemens E71/EL71;
Motorola, for example, the E398, SLVR L7, v360, v3i (and all LTE2-platform phones which have the patch applied).
Bada, for example, the Samsung Wave S8500.
Nokia phones or tablets running the Maemo or the Meego OS, for example, the Nokia N900.
Android uses ELF (shared object) libraries for the Java Native Interface. With Android Runtime (ART), the default since Android 5.0 "Lollipop", all applications are compiled into native ELF binaries on installation.
Some phones can run ELF files through the use of a patch that adds assembly code to the main firmware, which is a feature known as ELFPack in the underground modding culture. The ELF file format is also used with the Atmel AVR (8-bit), AVR32
and with Texas Instruments MSP430 microcontroller architectures. Some implementations of Open Firmware can also load ELF files, most notably Apple's implementation used in almost all PowerPC machines the company produced.
Specifications
Generic:
System V Application Binary Interface Edition 4.1 (1997-03-18)
System V ABI Update (October 2009)
AMD64:
System V ABI, AMD64 Supplement
ARM:
ELF for the ARM Architecture
IA-32:
System V ABI, Intel386 Architecture Processor Supplement
IA-64:
Itanium Software Conventions and Runtime Guide (September 2000)
M32R:
M32R ELF ABI Supplement Version 1.2 (2004-08-26)
MIPS:
System V ABI, MIPS RISC Processor Supplement
MIPS EABI documentation (2003-06-11)
Motorola 6800:
Motorola 8- and 16- bit Embedded ABI
PA-RISC:
ELF Supplement for PA-RISC Version 1.43 (October 6, 1997)
PowerPC:
System V ABI, PPC Supplement
PowerPC Embedded Application Binary Interface 32-Bit Implementation (1995-10-01)
64-bit PowerPC ELF Application Binary Interface Supplement Version 1.9 (2004)
SPARC:
System V ABI, SPARC Supplement
S/390:
S/390 32bit ELF ABI Supplement
zSeries:
zSeries 64bit ELF ABI Supplement
Symbian OS 9:
E32Image file format on Symbian OS 9
The Linux Standard Base (LSB) supplements some of the above specifications for architectures in which it is specified. For example, that is the case for the System V ABI, AMD64 Supplement.
86open
86open was a project to form consensus on a common binary file format for Unix and Unix-like operating systems on the common PC compatible x86 architecture, to encourage software developers to port to the architecture. The initial idea was to standardize on a small subset of Spec 1170, a predecessor of the Single UNIX Specification, and the GNU C Library (glibc) to enable unmodified binaries to run on the x86 Unix-like operating systems. The project was originally designated "Spec 150".
The format eventually chosen was ELF, specifically the Linux implementation of ELF, after it had turned out to be a de facto standard supported by all involved vendors and operating systems.
The group began email discussions in 1997 and first met together at the Santa Cruz Operation offices on August 22, 1997.
The steering committee was Marc Ewing, Dion Johnson, Evan Leibovitch, Bruce Perens, Andrew Roach, Bryan Wayne Sparks and Linus Torvalds. Other people on the project were Keith Bostic, Chuck Cranor, Michael Davidson, Chris G. Demetriou, Ulrich Drepper, Don Dugger, Steve Ginzburg, Jon "maddog" Hall, Ron Holt, Jordan Hubbard, Dave Jensen, Kean Johnston, Andrew Josey, Robert Lipe, Bela Lubkin, Tim Marsland, Greg Page, Ronald Joe Record, Tim Ruckle, Joel Silverstein, Chia-pi Tien, and Erik Troan. Operating systems and companies represented were BeOS, BSDI, FreeBSD, Intel, Linux, NetBSD, SCO and SunSoft.
The project progressed and in mid-1998, SCO began developing lxrun, an open-source compatibility layer able to run Linux binaries on OpenServer, UnixWare, and Solaris. SCO announced official support of lxrun at LinuxWorld in March 1999. Sun Microsystems began officially supporting lxrun for Solaris in early 1999, and later moved to integrated support of the Linux binary format via Solaris Containers for Linux Applications.
With the BSDs having long supported Linux binaries (through a compatibility layer) and the main x86 Unix vendors having added support for the format, the project decided that Linux ELF was the format chosen by the industry and "declare[d] itself dissolved" on July 25, 1999.
FatELF: universal binaries for Linux
FatELF is an ELF binary-format extension that adds fat binary capabilities. It is aimed at Linux and other Unix-like operating systems. In addition to the CPU architecture abstraction (byte order, word size, CPU instruction set, etc.), there is the potential advantage of software-platform abstraction, e.g. binaries which support multiple kernel ABI versions. FatELF has not been integrated into the mainline Linux kernel.
See also
Application binary interface
Comparison of executable file formats
DWARF, a format for debugging data
Intel Binary Compatibility Standard
Portable Executable, the format used by Windows
vDSO, a virtual dynamic shared object
Position-independent code
References
Further reading
An unsung hero: The hardworking ELF by Peter Seebach, December 20, 2005, archived from the original on February 24, 2007
The ELF Object File Format: Introduction, The ELF Object File Format by Dissection by Eric Youngdale (1995-05-01)
A Whirlwind Tutorial on Creating Really Teensy ELF Executables for Linux by Brian Raiter
ELF relocation into non-relocatable objects by Julien Vanegue (2003-08-13)
Embedded ELF debugging without ptrace by the ELFsh team (2005-08-01)
Study of ELF loading and relocs by Pat Beirne (1999-08-03)
External links
FreeBSD Handbook: Binary formats (archived version)
FreeBSD manual page
NetBSD ELF FAQ
Linux manual page
Oracle Solaris Linker and Libraries Guide
The ERESI project : reverse engineering on ELF-based operating systems
Linux Today article on 86open July 26, 1999
Announcement of 86open on Debian Announce mailing list October 10, 1997, Bruce Perens
Declaration of Ulrich Drepper (PDF) in The SCO Group vs IBM, September 19, 2006
86open and ELF discussion on Groklaw, August 13, 2006
Executable file formats |
43310 | https://en.wikipedia.org/wiki/Cygnus%20Solutions | Cygnus Solutions | Cygnus Solutions, originally Cygnus Support, was founded in 1989 by John Gilmore, Michael Tiemann and David Henkel-Wallace to provide commercial support for free software. Its tagline was: Making free software affordable.
For years, employees of Cygnus Solutions were the maintainers of several key GNU software products, including the GNU Debugger and GNU Binutils (which included the GNU Assembler and Linker). It was also a major contributor to the GCC project and drove the change in the project's management from having a single gatekeeper to having an independent committee. Cygnus developed BFD, and used it to help port GNU to many architectures, in a number of cases working under non-disclosure to produce tools used for initial bringup of software for a new chip design.
Cygnus was also the original developer of Cygwin, a POSIX layer and the GNU toolkit port to the Microsoft Windows operating system family, and of eCos, an embedded real-time operating system.
In the 2001 documentary film Revolution OS, Tiemann indicates that the name "Cygnus" was chosen from among several names that incorporated the acronym "GNU". According to Stan Kelly-Bootle, it was recursively defined as Cygnus, your GNU Support.
On November 15, 1999, Cygnus Solutions announced its merger with Red Hat, and it ceased to exist as a separate company in early 2000. A number of Cygnus employees continue to work for Red Hat, including Tiemann, who serves as Red Hat's Vice President of Open Source Affairs and formerly served as CTO.
References
External links
Free software companies
Red Hat
Software companies disestablished in 2000
Software companies established in 1989 |
8134924 | https://en.wikipedia.org/wiki/Comparison%20of%20numerical-analysis%20software | Comparison of numerical-analysis software | The following tables provide a comparison of numerical-analysis software.
Applications
General
Operating system support
The operating systems the software can run on natively (without emulation).
Language features
Libraries
General
Operating-system support
The operating systems the software can run on natively (without emulation).
See also
Comparison of computer algebra systems
Comparison of deep-learning software
Comparison of statistical packages
List of numerical-analysis software
Footnotes
References
Numerical analysis software |
7878457 | https://en.wikipedia.org/wiki/Computer | Computer | A computer is a digital electronic machine that can be programmed to carry out sequences of arithmetic or logical operations (computation) automatically. Modern computers can perform generic sets of operations known as programs. These programs enable computers to perform a wide range of tasks. A computer system is a "complete" computer that includes the hardware, operating system (main software), and peripheral equipment needed and used for "full" operation. This term may also refer to a group of computers that are linked and function together, such as a computer network or computer cluster.
A broad range of industrial and consumer products use computers as control systems. Simple special-purpose devices like microwave ovens and remote controls are included, as are factory devices like industrial robots and computer-aided design, as well as general-purpose devices like personal computers and mobile devices like smartphones. Computers power the Internet, which links billions of other computers and users.
Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit (IC) chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (as predicted by Moore's law), leading to the Digital Revolution during the late 20th to early 21st centuries.
Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, along with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joystick, etc.), output devices (monitor screens, printers, etc.), and input/output devices that perform both functions (e.g., the 2000s-era touchscreen). Peripheral devices allow information to be retrieved from an external source and they enable the result of operations to be saved and retrieved.
Etymology
According to the Oxford English Dictionary, the first known use of computer was in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century. During the latter part of this period women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women.
The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "calculating machine" (of any type) is from 1897. The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine".
History
Pre-20th century
Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example.
The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.
The Antikythera mechanism is believed to be the earliest mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to . Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later.
Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, .
The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation.
The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage.
The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft.
In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates.
In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which, through a system of pulleys and cylinders and over, could predict the perpetual calendar for every year from AD 0 (that is, 1 BC) to AD 4000, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location.
The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers.
First computer
Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.
Analog computers
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson.
The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).
Digital computers
Electromechanical
By 1938, the United States Navy had developed an electromechanical analog computer small enough to use aboard a submarine. This was the Torpedo Data Computer, which used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II similar devices were developed in other countries as well.
Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939, was one of the earliest examples of an electromechanical relay computer.
In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22 bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete.
Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, , which was founded in 1941 as the first company with the sole purpose of developing computers.
Vacuum tubes and digital electronic circuits
Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.
During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February.
Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (The Mk I was converted to a Mk II making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process.
The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls".
It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.
Modern computers
Concept of modern computer
The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
Stored programs
Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.
The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was considered "small and primitive" by the standards of its time, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project was initiated at the university to develop it into a more usable computer, the Manchester Mark 1. Grace Hopper was the first person to develop a compiler for a programming language.
The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. The LEO I computer became operational in April 1951 and ran the world's first regular routine office computer job.
Transistors
The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were also much more reliable than vacuum tubes and had a longer, indefinite service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications.
At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorised computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell.
The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics.
Integrated circuits
The next great advance in computing power came with the advent of the integrated circuit (IC).
The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on 7 May 1952.
The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce.
Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Mohamed M. Atalla's work on semiconductor surface passivation by silicon dioxide in the late 1950s.
Modern monolithic ICs are predominantly MOS (metal-oxide-semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs.
The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel. In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip.
Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. If not integrated, the RAM is usually placed directly above (known as package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC, all to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power.
Mobile computers
The first mobile computers were heavy and ran from mains power. The IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s.
These smartphones and tablets run on a variety of operating systems and have recently become the dominant computing devices on the market. They are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin.
Types
Computers can be classified in a number of different ways, including:
By architecture
Analog computer
Digital computer
Hybrid computer
Harvard architecture
Von Neumann architecture
Complex instruction set computer
Reduced instruction set computer
By size, form-factor and purpose
Supercomputer
Mainframe computer
Minicomputer (term no longer used)
Server
Rackmount server
Blade server
Tower server
Personal computer
Workstation
Microcomputer (term no longer used)
Home computer
Desktop computer
Tower desktop
Slimline desktop
Multimedia computer (non-linear editing system computers, video editing PCs and the like)
Gaming computer
All-in-one PC
Nettop (Small form factor PCs, Mini PCs)
Home theater PC
Keyboard computer
Portable computer
Thin client
Internet appliance
Laptop
Desktop replacement computer
Gaming laptop
Rugged laptop
2-in-1 PC
Ultrabook
Chromebook
Subnotebook
Netbook
Mobile computers:
Tablet computer
Smartphone
Ultra-mobile PC
Pocket PC
Palmtop PC
Handheld PC
Wearable computer
Smartwatch
Smartglasses
Single-board computer
Plug computer
Stick PC
Programmable logic controller
Computer-on-module
System on module
System in a package
System-on-chip (Also known as an Application Processor or AP if it lacks circuitry such as radio circuitry)
Microcontroller
Hardware
The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphics cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and input devices such as mice are all hardware.
History of computing hardware
Other hardware topics
A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.
Input devices
When unprocessed data is sent to the computer with the help of input devices, the data is processed and sent to output devices. The input devices may be hand-operated or automated. The act of processing is mainly regulated by the CPU. Some examples of input devices are:
Computer keyboard
Digital camera
Digital video
Graphics tablet
Image scanner
Joystick
Microphone
Mouse
Overlay keyboard
Real-time clock
Trackball
Touchscreen
Light pen
Output devices
The means through which a computer gives output are known as output devices. Some examples of output devices are:
Computer monitor
Printer
PC speaker
Projector
Sound card
Video card
Control unit
The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer. Control systems in advanced computers may change the order of execution of some instructions to improve performance.
A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.
The control system's function is as follows— this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU:
Read the code for the next instruction from the cell indicated by the program counter.
Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
Increment the program counter so it points to the next instruction.
Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
Provide the necessary data to an ALU or register.
If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
Write the result from the ALU back to a memory location or to a register or perhaps an output device.
Jump back to step (1).
Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen.
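The steps above can be made concrete with a short program. The following sketch simulates a toy accumulator machine whose opcodes and memory layout are invented purely for illustration; it is not a model of any real CPU:

// A minimal sketch of the fetch-decode-execute cycle for an imaginary
// accumulator machine. Opcodes and layout are invented for illustration.
public class ToyCpu {
    static final int HALT = 0, LOAD = 1, ADD = 2, STORE = 3, JUMP = 4;

    public static void main(String[] args) {
        int[] memory = {
            LOAD, 10,    // 0: load memory[10] into the accumulator
            ADD, 11,     // 2: add memory[11] to the accumulator
            STORE, 12,   // 4: write the accumulator to memory[12]
            HALT, 0,     // 6: stop
            0, 0,
            5, 7, 0      // 10..12: data cells
        };
        int pc = 0;          // program counter
        int accumulator = 0;
        boolean running = true;
        while (running) {
            int opcode = memory[pc];       // 1. fetch the instruction
            int operand = memory[pc + 1];
            pc += 2;                       // 2. advance the program counter
            switch (opcode) {              // 3. decode and execute
                case LOAD:  accumulator = memory[operand]; break;
                case ADD:   accumulator += memory[operand]; break;
                case STORE: memory[operand] = accumulator; break;
                case JUMP:  pc = operand; break;   // changes the flow of control
                case HALT:  running = false; break;
            }
        }
        System.out.println("memory[12] = " + memory[12]);  // prints 12
    }
}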
Central processing unit (CPU)
The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor.
Arithmetic logic unit (ALU)
The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic.
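As a simple illustration of breaking a complex operation into simpler steps, the following sketch multiplies non-negative integers using only addition and comparison, the kind of decomposition a program can use when an ALU lacks a multiply instruction (a deliberately naive example, not how real hardware or libraries do it):

// Multiplication expressed only in terms of addition and comparison.
// Deliberately naive for clarity; assumes b is non-negative.
public class SoftwareMultiply {
    static int multiply(int a, int b) {
        int product = 0;
        int count = 0;
        while (count < b) {         // comparison: is count still less than b?
            product = product + a;  // addition only
            count = count + 1;
        }
        return product;
    }

    public static void main(String[] args) {
        System.out.println(multiply(6, 7));  // prints 42
    }
}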
Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.
Memory
A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2⁸ = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
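As a concrete illustration, the sketch below models memory as a plain array of byte-sized cells and stores a number too large for a single byte across four consecutive cells (the cell addresses are arbitrary examples):

// Memory modelled as an array of byte-sized cells. A 32-bit number is split
// across four consecutive cells and reassembled, illustrating how larger
// values are built from 8-bit units. Addresses are arbitrary examples.
public class MemoryCells {
    public static void main(String[] args) {
        byte[] memory = new byte[4096];     // 4096 numbered cells
        int value = 123456;                 // too large for one byte
        int address = 1357;

        // store the number across cells 1357..1360, least significant byte first
        for (int i = 0; i < 4; i++) {
            memory[address + i] = (byte) (value >> (8 * i));
        }

        // read the four cells back and reassemble the original number
        int readBack = 0;
        for (int i = 0; i < 4; i++) {
            readBack |= (memory[address + i] & 0xFF) << (8 * i);
        }
        System.out.println(readBack);       // prints 123456
    }
}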
The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.
Computer main memory comes in two principal varieties:
random-access memory or RAM
read-only memory or ROM
RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.
In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
Input/output (I/O)
I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.
I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry.
Multitasking
While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.
Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss.
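A minimal sketch of the time-slicing idea, greatly simplified from what real interrupt hardware and operating-system schedulers do, gives each program one unit of work per turn:

// A toy round-robin scheduler: each "program" gets one slice of work in turn,
// giving the appearance of simultaneous execution on a single processor.
// A simplification; real systems rely on hardware interrupts and schedulers.
public class TimeSlicing {
    public static void main(String[] args) {
        String[] programs = {"editor", "music player", "download"};
        int[] workRemaining = {3, 2, 4};        // arbitrary units of work

        boolean anyLeft = true;
        while (anyLeft) {
            anyLeft = false;
            for (int i = 0; i < programs.length; i++) {
                if (workRemaining[i] > 0) {
                    // the "interrupt" ends this slice after one unit of work
                    System.out.println("time slice -> " + programs[i]);
                    workRemaining[i]--;
                    anyLeft = true;
                }
            }
        }
    }
}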
Multiprocessing
Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.
Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers. They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
Software
Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. Software is that part of a computer system that consists of encoded information or computer instructions, in contrast to the physical hardware from which the system is built. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither can be realistically used on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware".
Languages
There are thousands of different programming languages—some intended for general purpose, others useful for only highly specialized applications.
Programs
The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.
Stored program architecture
This section applies to most common RAM machine–based computers.
In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.
Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.
Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. The following example is written in the MIPS assembly language:
begin:
addi $8, $0, 0 # initialize sum to 0
addi $9, $0, 1 # set first number to add = 1
loop:
slti $10, $9, 1001 # check if the number is still less than or equal to 1000
beq $10, $0, finish # if not, the sum is complete, so exit the loop
add $8, $8, $9 # update sum
addi $9, $9, 1 # get next number
j loop # repeat the summing process
finish:
add $2, $8, $0 # put sum in output register
Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second.
Machine code
In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers, it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.
Programming language
Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques.
Low-level languages
Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC. Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80.
High-level languages
Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler. High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.
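For comparison with the earlier MIPS assembly listing, the same task of adding the numbers from 1 to 1,000 written in a high-level language (Java is used here purely as an example) leaves the choice of registers, jumps and memory locations to the compiler:

// The sum-from-1-to-1000 task from the earlier assembly example, written in
// a high-level language. A compiler translates this into machine code for
// whatever CPU architecture is targeted.
public class SumToOneThousand {
    public static void main(String[] args) {
        int sum = 0;
        for (int n = 1; n <= 1000; n++) {
            sum += n;
        }
        System.out.println(sum);   // prints 500500
    }
}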
Program design
Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies.
The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.
Bugs
Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases, they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.
Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.
Networking and the Internet
Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. The technologies that made the Arpanet possible spread and evolved.
In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.
Unconventional computers
A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer, the modern definition of a computer is literally: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." Any device which processes information qualifies as a computer, especially if the processing is purposeful.
Future
There is active research to make computers out of many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal, and are able to calculate any computable function, and are limited only by their memory capacity and operating speed. However different designs of computers can give very different performance for particular problems; for example quantum computers can potentially break some modern encryption algorithms (by quantum factoring) very quickly.
Computer architecture paradigms
There are many types of computer architectures:
Quantum computer vs. Chemical computer
Scalar processor vs. Vector processor
Non-Uniform Memory Access (NUMA) computers
Register machine vs. Stack machine
Harvard architecture vs. von Neumann architecture
Cellular architecture
Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.
Artificial intelligence
A computer will solve problems in exactly the way it is programmed to, without regard to efficiency, alternative solutions, possible shortcuts, or possible errors in the code. Computer programs that learn and adapt are part of the emerging field of artificial intelligence and machine learning. Artificial intelligence based products generally fall into two major categories: rule-based systems and pattern recognition systems. Rule-based systems attempt to represent the rules used by human experts and tend to be expensive to develop. Pattern-based systems use data about a problem to generate conclusions. Examples of pattern-based systems include voice recognition, font recognition, translation and the emerging field of on-line marketing.
Professions and organizations
As the use of computers has spread throughout society, there are an increasing number of careers involving computers.
The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
See also
Glossary of computers
Computability theory
Computer security
Glossary of computer hardware terms
History of computer science
List of computer term etymologies
List of fictional computers
List of pioneers in computer science
Pulse computation
TOP500 (list of most powerful computers)
Unconventional computing
References
Notes
External links
Warhol & The Computer
Consumer electronics
Articles containing video clips
Articles with example code
Electronics industry |
21116821 | https://en.wikipedia.org/wiki/Db4o | Db4o | db4o (database for objects) was an embeddable open-source object database for Java and .NET developers. It was developed, commercially licensed and supported by Actian. In October 2014, Actian declined to continue to actively pursue and promote the commercial db4o product offering for new customers.
History
The term object-oriented database system dates back to around 1985, though the first research developments in this area started during the mid-1970s. The first commercial object database management systems were created in the early 1990s; these added the concept of native database driven persistence into the field of object-oriented development.
The second wave of growth was observed in the first decade of the 21st century, when object-oriented databases written completely in an object-oriented language appeared on the market. db4o is one of the examples of such systems written completely in Java and C#.
The db4o project was started in 2000 by chief architect Carl Rosenberger, shipping in 2001. It was used in enterprise and academic applications prior to its commercial announcement in 2004 by newly created private company Db4objects Inc.
In 2008 db4o was purchased by Versant corporation, which marketed it as open-source bi-licensed software: commercial and the GNU General Public License (GPL).
Overview
db4o represents an object-oriented database model. One of its main goals is to provide an easy and native interface to persistence for object oriented programming languages. Development with db4o database does not require a separate data model creation, the application's class model defines the structure of the data. db4o attempts to avoid the object/relational impedance mismatch by eliminating the relational layer from a software project. db4o is written in Java and .NET and provides the respective APIs. It can run on any operating system that supports Java or .NET. It is offered under licenses including GPL, the db4o Opensource Compatibility License (dOCL), and a commercial license for use in proprietary software.
Developers using relational databases can view db4o as a complementary tool. The db4o-RDBMS data exchange can be implemented using the db4o Replication System (dRS). dRS can also be used for migration between object (db4o) and relational (RDBMS) technologies.
As an embedded database, db4o can run inside the application process. It is distributed as a library (jar/dll).
Features
One-line-of-code database
db4o contains a function to store any object:
objectContainer.store(new SomeClass());
SomeClass here does not require any interface implementations, annotations or attributes added. It can be any application class including third-party classes contained in referenced libraries.
All field objects (including collections) are saved automatically. Special cases can be handled through writing custom type handlers.
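A slightly fuller sketch of the same idea is shown below. It follows the db4o Java API as commonly documented (an embedded ObjectContainer opened on a local file); exact package and method names may differ between db4o versions, and the SomeClass field shown here is purely illustrative:

// A sketch of storing and retrieving plain objects with db4o. Package and
// method names follow the documented Java API; exact signatures may vary
// between db4o versions. SomeClass and its field are illustrative.
import com.db4o.Db4oEmbedded;
import com.db4o.ObjectContainer;
import com.db4o.ObjectSet;

public class Db4oExample {
    public static void main(String[] args) {
        ObjectContainer db = Db4oEmbedded.openFile(
                Db4oEmbedded.newConfiguration(), "example.db4o");
        try {
            db.store(new SomeClass("first"));    // persist plain objects
            db.store(new SomeClass("second"));
            db.commit();

            // query by example: fields left at default values act as wildcards
            ObjectSet<?> results = db.queryByExample(new SomeClass(null));
            for (Object found : results) {
                System.out.println(found);
            }
        } finally {
            db.close();   // releases the database file
        }
    }
}

class SomeClass {
    private String name;                          // plain field, no annotations
    SomeClass(String name) { this.name = name; }
    public String toString() { return "SomeClass(" + name + ")"; }
}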
Embeddable
db4o is designed to be embedded in clients or other software components invisible to the end user. Thus, db4o needs no separate installation mechanism, but comes as a single library file with a footprint of around 670kB in the .NET version and around 1MB in the Java version.
Client-server mode
The client/server version allows db4o to communicate between client and server-side applications. It uses TCP/IP for client-server communication and allows the port number to be configured. Communication is implemented through messaging.
Due to a feature referred to as "Generic Reflection", db4o can work without implementing persistent classes on the server. However, this mode has limitations.
Dynamic schema evolution
db4o supports automatic object schema evolution for basic class model changes (field name deletion/addition). More complex class model modifications, such as a field name change, a field type change or a hierarchy move, are not automated out of the box, but can be automated by writing a small utility update program (see documentation).
This feature can be viewed as an advantage over the relational model, where any change in the schema usually results in a manual code review and upgrade to match the schema changes.
Native queries
Rather than using string-based APIs (such as SQL, OQL, JDOQL, EJB QL, and SODA), Native Queries (NQ) allow developers to simply use the programming language itself (e.g., Java, C#, or VB.NET) to access the database and thus avoid a constant, productivity-reducing context switch between programming language and data access API. Native Queries also provide type safety, as well as remove the need to sanitize against code injection (see SQL Injection).
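In the Java version, a native query is an ordinary object whose match method is compiled and type-checked like any other code. The sketch below is illustrative; the Student class and its fields are hypothetical, while Predicate and query follow the documented db4o API:

// A native query: the filter is ordinary, compiler-checked Java code rather
// than a query string. Student and its fields are hypothetical examples.
import com.db4o.ObjectContainer;
import com.db4o.ObjectSet;
import com.db4o.query.Predicate;

public class NativeQueryExample {
    static ObjectSet<Student> adults(ObjectContainer db, final int minimumAge) {
        return db.query(new Predicate<Student>() {
            @Override
            public boolean match(Student candidate) {
                return candidate.getAge() >= minimumAge;   // plain Java expression
            }
        });
    }
}

class Student {
    private String name;
    private int age;
    Student(String name, int age) { this.name = name; this.age = age; }
    int getAge() { return age; }
}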
LINQ
LINQ support is fully integrated in db4o for .NET version 3.5. LINQ allows the creation of object-oriented queries of any complexity with the benefit of compile-time checking, IDE Intellisense integration and automated refactoring.
Due to integration with some open-source libraries db4o also allows optimized LINQ queries on Compact Framework.
LINQ can be used both against relational and object data storage, thus providing a bridge between them. It can also be used as an abstraction layer, allowing the underlying database technology to be switched easily.
Disadvantages
The drawbacks and difficulties faced by other Object Databases also apply to Db4o:
Other things that work against ODBMS seem to be the lack of interoperability with a great number of tools/features that are taken for granted concerning SQL, including but not limited to industry standard connectivity, reporting tools, OLAP tools, and backup and recovery standards. Object databases also lack a formal mathematical foundation, unlike the relational model, and this in turn leads to weaknesses in their query support. However, some ODBMSs fully support SQL in addition to navigational access, e.g. Objectivity/SQL++, Matisse, and InterSystems CACHÉ. Effective use may require compromises to keep both paradigms in sync.
Disadvantages specific to Db4o may include:
Lack of full-text indexing, poor performance on full-text search
Lack of Indexing for string types, meaning text based searches can potentially be very slow
"There is no general query language like SQL which can be used for data analyzing or by other applications. This does not allow db4o to be very flexible in a heterogeneous environment"
Replication cannot be done administratively—i.e. one needs to program an application to achieve replication. "This is contrary to most RDBMS, where administrators manage servers and replication between them."
Deleted fields are not immediately removed, just hidden until the next Defrag
No built-in support to import/export data to/from text, XML or JSON files
Portability and cross-platform deployment
db4o supported Java's JDK 1.1.x through 6.0 and runs on Java EE and Java SE. db4o also runs with Java ME dialects that support reflection, such as CDC, Personal Profile, Symbian OS, SavaJe and Zaurus. Depending on customer demand, db4o will also run on dialects without reflection, such as CLDC, MIDP, BlackBerry and Palm OS.
db4o was successfully tested on JavaFX and Silverlight.
db4o ran on Android.
db4o uses a custom feature called the "generic reflector" to represent class information when class definitions are not available, which allows it to be used in a mixed Java–.NET environment, for example a Java client with a .NET server and vice versa. The generic reflector also aids the conversion of a project between environments, as the database does not have to be converted.
Documentation and support
db4o provides several sources of documentation: a tutorial, reference documentation, API documentation, online paircasts and blogs. Information can also be retrieved from forums and community additions (articles, translated documentation sources, sample projects etc.).
For commercial users, db4o offers a dDN (db4o developer network) subscription with guaranteed 24-hour support and live pairing sessions with the client (Xtreme Connect).
Object Manager
Object Management Enterprise (OME) is a db4o database browsing tool, which is available as a plugin to Eclipse and MS Visual Studio 2005/2008. OME allows browsing the classes and objects in a database, connecting to a database server, building queries using drag and drop, and viewing database statistics.
OME provides some administrative functions such as indexing, defragmentation and backup.
OME was initially offered to customers as a commercial product available only to dDN subscribers. From db4o version 7.8, OME was included in the standard db4o distribution and its source was made available to the public in the db4o SVN repository.
Community
The db4o community grew to over 60,000 registered members. Important db4o-related projects, such as the standalone Object Manager, encryption support and Mono support, are fully driven by community members. db4o's Code Commander program defined the terms and conditions of community project development.
db4o provides community members with free access to its code, documentation, forums and releases. The community's votes for the most important features and most critical bugs are taken into consideration when defining the road map and weekly iteration plans.
db4o sometimes held contests allowing the community members to come up with the best suggestion for an improvement, which was later on integrated into the core code.
Versions
db4o releases development, production and stable builds. The development version provides the newest features and is released for testing, community feedback and evaluation. The production version is meant to be used in a production environment and includes features that have already been evaluated and proven over time. The stable version is meant to be used in final product shipments.
db4o also runs a continuous build, which is triggered by any new change committed to the SVN code repository. This build is open to the community and can be used to evaluate the latest changes and acquire the newest features.
The db4o build name format is meant to provide all the necessary information about the version, build time and supported platform:
For example: db4o-7.2.30.9165-java.zip
db4o – name of the product, i.e. db4o database engine
7.2 – the release number
30 – iteration number, i.e. a sequential number identifying a development week
9165 – SVN revision number, corresponding to the last commit that triggered the build
java – the Java version of db4o. The .NET version is identified by “net” for .NET 2.0 releases or “net35” for .NET 3.5. The .NET version includes the corresponding Compact Framework release.
The public db4o SVN repository is also available for developers to get the source code and build versions locally, with or without custom modifications.
Below is a short summary of the main features of the stable, production and development builds:
References
Further reading
Stefan Edlich, Jim Paterson, Henrik Hörning, Reidar Hörning, The definitive guide to db4o, Apress, 2006,
Ted Neward, The busy Java developer's guide to db4o, (7-article series), IBM DeveloperWorks
External links
http://drdobbs.com - Article about RETSCAN, a retina scanning system using db4o
Object-oriented database management systems
Free database management systems
NoSQL
Free software programmed in C Sharp
Free software programmed in Java (programming language) |
12824727 | https://en.wikipedia.org/wiki/Sliding%20window%20protocol | Sliding window protocol | A sliding window protocol is a feature of packet-based data transmission protocols. Sliding window protocols are used where reliable in-order delivery of packets is required, such as in the data link layer (OSI layer 2) as well as in the Transmission Control Protocol (TCP). They are also used to improve efficiency when the channel may include high latency.
Packet-based systems are based on the idea of sending a batch of data, the packet, along with additional data that allows the receiver to ensure it was received correctly, perhaps a checksum. The paradigm is similar to a window sliding sideways to allow entry of fresh packets and reject the ones that have already been acknowledged. When the receiver verifies the data, it sends an acknowledgment signal, or "ACK", back to the sender to indicate it can send the next packet. In a simple automatic repeat request protocol (ARQ), the sender stops after every packet and waits for the receiver to ACK. This ensures packets arrive in the correct order, as only one may be sent at a time.
The time that it takes for the ACK signal to be received may represent a significant amount of time compared to the time needed to send the packet. In this case, the overall throughput may be much lower than theoretically possible. To address this, sliding window protocols allow a selected number of packets, the window, to be sent without having to wait for an ACK. Each packet receives a sequence number, and the ACKs send back that number. The protocol keeps track of which packets have been ACKed, and when they are received, sends more packets. In this way, the window slides along the stream of packets making up the transfer.
Sliding windows are a key part of many protocols. It is a key part of the TCP protocol, which inherently allows packets to arrive out of order, and is also found in many file transfer protocols like UUCP-g and ZMODEM as a way of improving efficiency compared to non-windowed protocols like XMODEM.
Basic concept
Conceptually, each portion of the transmission (packets in most data link layers, but bytes in TCP) is assigned a unique consecutive sequence number, and the receiver uses the numbers to place received packets in the correct order, discarding duplicate packets and identifying missing ones. The problem with this is that there is no limit on the size of the sequence number that can be required.
By placing limits on the number of packets that can be transmitted or received at any given time, a sliding window protocol allows an unlimited number of packets to be communicated using fixed-size sequence numbers.
The term "window" on the transmitter side represents the logical boundary of the total number of packets yet to be acknowledged by the receiver. The receiver informs the transmitter in each acknowledgment packet the current maximum receiver buffer size (window boundary). The TCP header uses a 16-bit field to report the receiver window size to the sender. Therefore, the largest window that can be used is 2¹⁶ = 64 kilobytes.
In slow-start mode, the transmitter starts with a low packet count and increases the number of packets in each transmission after receiving acknowledgment packets from the receiver. For every ack packet received, the window slides by one packet (logically) to transmit one new packet. When the window threshold is reached, the transmitter sends one packet for one ack packet received.
If the window limit is 10 packets, then in slow-start mode the transmitter may start by transmitting one packet, then two packets (before transmitting two packets, an acknowledgment for the first has to be received), then three packets and so on until 10 packets. After reaching 10 packets, further transmissions are restricted to one packet transmitted for each ack packet received. In a simulation this appears as if the window is moving by one packet distance for every ack packet received. On the receiver side the window also moves one packet for every packet received.
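The growth pattern just described can be sketched as follows; the sketch counts whole rounds rather than individual acknowledgments and is a simplification of real TCP behaviour:

// A simplified sketch of the growth described above: the number of packets
// sent per round grows by one each round until the window limit is reached,
// after which it stays at the limit (one new packet per acknowledgment).
public class WindowGrowth {
    public static void main(String[] args) {
        final int windowLimit = 10;
        int packetsInFlight = 1;
        int totalSent = 0;

        for (int round = 1; round <= 15; round++) {
            System.out.println("round " + round + ": sending "
                    + packetsInFlight + " packet(s)");
            totalSent += packetsInFlight;
            if (packetsInFlight < windowLimit) {
                packetsInFlight++;    // grow the window while below the limit
            }
        }
        System.out.println("total packets sent: " + totalSent);
    }
}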
The sliding window method helps to avoid traffic congestion on the network. The application layer can still offer data for transmission to TCP without worrying about network traffic congestion issues, as TCP on both the sender and receiver sides implements sliding windows of packet buffers. The window size may vary dynamically depending on network traffic.
For the highest possible throughput, it is important that the transmitter is not forced to stop sending by the sliding window protocol earlier than one round-trip delay time (RTT). The limit on the amount of data that it can send before stopping to wait for an acknowledgment should be larger than the bandwidth-delay product of the communications link. If it is not, the protocol will limit the effective bandwidth of the link.
Motivation
In any communication protocol based on automatic repeat request for error control, the receiver must acknowledge received packets. If the transmitter does not receive an acknowledgment within a reasonable time, it re-sends the data.
A transmitter that does not get an acknowledgment cannot know if the receiver actually received the packet; it may be that it was lost or damaged in transmission. If the error detection mechanism reveals corruption, the packet will be ignored by the receiver and a negative or duplicate acknowledgement will be sent by the receiver. The receiver may also be configured to not send any acknowledgement at all. Similarly, the receiver is usually uncertain about whether its acknowledgements are being received. It may be that an acknowledgment was sent, but was lost or corrupted in the transmission medium. In this case, the receiver must acknowledge the retransmission to prevent the data being continually resent, but must otherwise ignore it.
Protocol operation
The transmitter and receiver each have a current sequence number nt and nr, respectively. They each also have a window size wt and wr. The window sizes may vary, but in simpler implementations they are fixed. The window size must be greater than zero for any progress to be made.
As typically implemented, nt is the next packet to be transmitted, i.e. the sequence number of the first packet not yet transmitted. Likewise, nr is the first packet not yet received. Both numbers are monotonically increasing with time; they only ever increase.
The receiver may also keep track of the highest sequence number yet received; the variable ns is one more than the highest sequence number received. For simple receivers that only accept packets in order (wr = 1), this is the same as nr, but can be greater if wr > 1. Note the distinction: all packets below nr have been received, no packets at or above ns have been received, and between nr and ns, some packets have been received.
When the receiver receives a packet, it updates its variables appropriately and transmits an acknowledgment with the new nr. The transmitter keeps track of the highest acknowledgment it has received na. The transmitter knows that all packets up to, but not including na have been received, but is uncertain about packets between na and ns; i.e. na ≤ nr ≤ ns.
The sequence numbers always obey the rule that na ≤ nr ≤ ns ≤ nt ≤ na + wt. That is:
na ≤ nr: The highest acknowledgement received by the transmitter cannot be higher than the highest nr acknowledged by the receiver.
nr ≤ ns: The span of fully received packets cannot extend beyond the end of the partially received packets.
ns ≤ nt: The highest packet received cannot be higher than the highest packet sent.
nt ≤ na + wt: The highest packet sent is limited by the highest acknowledgement received and the transmit window size.
Transmitter operation
Whenever the transmitter has data to send, it may transmit up to wt packets ahead of the latest acknowledgment na. That is, it may transmit packet number nt as long as nt < na+wt.
In the absence of a communication error, the transmitter soon receives an acknowledgment for all the packets it has sent, leaving na equal to nt. If this does not happen after a reasonable delay, the transmitter must retransmit the packets between na and nt.
Techniques for defining "reasonable delay" can be extremely elaborate, but they only affect efficiency; the basic reliability of the sliding window protocol does not depend on the details.
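A compact sketch of the transmitter side, using the variable names defined above (na, nt, wt); timers, packet contents and real I/O are omitted, and retransmission is handled in the simple go-back style:

// Transmitter side of a sliding window, using the article's variable names:
// na = highest acknowledgment received, nt = next sequence number to send,
// wt = transmit window size. Timers and real I/O are omitted for brevity.
public class SlidingWindowTransmitter {
    private final int wt;      // transmit window size
    private long na = 0;       // every packet below na has been acknowledged
    private long nt = 0;       // sequence number of the next packet to send

    SlidingWindowTransmitter(int windowSize) { this.wt = windowSize; }

    boolean maySend() {
        return nt < na + wt;   // at most wt unacknowledged packets in flight
    }

    long sendNext() {
        if (!maySend()) throw new IllegalStateException("window is full");
        return nt++;           // sequence number of the packet just "sent"
    }

    void onAcknowledgment(long receiverNr) {
        if (receiverNr > na) na = receiverNr;   // slide the window forward
    }

    void onTimeout() {
        nt = na;               // go-back style: resend everything from na onwards
    }
}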
Receiver operation
Every time a packet numbered x is received, the receiver checks to see if it falls in the receive window, nr ≤ x < nr+wr. (The simplest receivers only have to keep track of one value nr = ns.) If it falls within the window, the receiver accepts it. If it is numbered nr, the receive sequence number is increased by 1, and possibly more if further consecutive packets were previously received and stored. If x > nr, the packet is stored until all preceding packets have been received. If x ≥ ns, the latter is updated to ns = x+1.
If the packet's number is not within the receive window, the receiver discards it and does not modify nr or ns.
Whether the packet was accepted or not, the receiver transmits an acknowledgment containing the current nr. (The acknowledgment may also include information about additional packets received between nr or ns, but that only helps efficiency.)
Note that there is no point having the receive window wr larger than the transmit window wt, because there is no need to worry about receiving a packet that will never be transmitted; the useful range is 1 ≤ wr ≤ wt.
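The receive-side handling described above can also be sketched in C, reusing the sw_receiver structure from the earlier sketch; deliver(), store(), is_stored() and send_ack() stand in for buffer management and the acknowledgment path and are assumptions, not a real API:

 extern void deliver(uint32_t seq);     /* assumed hook: pass a packet up to the application  */
 extern void store(uint32_t seq);       /* assumed hook: hold an out-of-order packet          */
 extern bool is_stored(uint32_t seq);   /* assumed hook: was this packet buffered earlier?    */
 extern void send_ack(uint32_t nr);     /* assumed hook: transmit a cumulative acknowledgment */

 void receiver_on_packet(struct sw_receiver *r, uint32_t x)
 {
     if (r->nr <= x && x < r->nr + r->wr) {          /* inside the receive window */
         if (x == r->nr) {
             deliver(x);
             r->nr++;
             /* Release any consecutive packets that were stored earlier. */
             while (is_stored(r->nr)) {
                 deliver(r->nr);
                 r->nr++;
             }
         } else {
             store(x);            /* out of order: hold until the gap is filled */
         }
         if (x >= r->ns)
             r->ns = x + 1;
     }
     /* Whether the packet was accepted or not, acknowledge the current nr. */
     send_ack(r->nr);
 }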
Sequence number range required
So far, the protocol has been described as if sequence numbers are of unlimited size, ever-increasing. However, rather than transmitting the full sequence number x in messages, it is possible to transmit only x mod N, for some finite N. (N is usually a power of 2.)
For example, the transmitter will only receive acknowledgments in the range na to nt, inclusive. Since the protocol guarantees that nt−na ≤ wt, there are at most wt+1 possible sequence numbers that could arrive at any given time. Thus, the transmitter can unambiguously decode the sequence number as long as N > wt.
A stronger constraint is imposed by the receiver. The operation of the protocol depends on the receiver being able to reliably distinguish new packets (which should be accepted and processed) from retransmissions of old packets (which should be discarded, and the last acknowledgment retransmitted). This can be done given knowledge of the transmitter's window size. After receiving a packet numbered x, the receiver knows that x < na+wt, so na > x−wt. Thus, packets numbered x−wt or lower will never again be retransmitted, and the lowest sequence number the receiver can ever see in the future is ns−wt.
The receiver also knows that the transmitter's na cannot be higher than the highest acknowledgment ever sent, which is nr. So the highest sequence number the receiver could possibly see is nr+wt ≤ ns+wt.
Thus, there are 2wt different sequence numbers that the receiver can receive at any one time. It might therefore seem that we must have N ≥ 2wt. However, the actual limit is lower.
The additional insight is that the receiver does not need to distinguish between sequence numbers that are too low (less than nr) or that are too high (greater than or equal to ns+wr). In either case, the receiver ignores the packet except to retransmit an acknowledgment. Thus, it is only necessary that N ≥ wt+wr. As it is common to have wr<wt (e.g. see Go-Back-N below), this can permit larger wt within a fixed N.
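When only x mod N is carried on the wire, the receiver can place an incoming number relative to its window using modular arithmetic. A minimal sketch, assuming N is a power of two so that the reduction becomes a bit mask (the constant and function names are illustrative):

 #include <stdbool.h>
 #include <stdint.h>

 #define SEQ_MODULUS 8u    /* example: 3-bit sequence numbers, N = 8 */

 /* Forward distance from base to seq, computed modulo N. */
 static uint32_t seq_distance(uint32_t base, uint32_t seq)
 {
     return (seq - base) & (SEQ_MODULUS - 1u);
 }

 /* With N >= wt + wr, a received number is new exactly when its distance
  * from nr is less than the receive window; anything else is an old
  * packet (or a retransmission) and is only re-acknowledged. */
 static bool seq_in_receive_window(uint32_t nr, uint32_t seq, uint32_t wr)
 {
     return seq_distance(nr, seq) < wr;
 }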
Examples
The simplest sliding window: stop-and-wait
Although commonly distinguished from the sliding-window protocol, the stop-and-wait ARQ protocol is actually the simplest possible implementation of it. The transmit window is 1 packet, and the receive window is 1 packet. Thus, N = 2 possible sequence numbers (conveniently represented by a single bit) are required.
Ambiguity example
The transmitter alternately sends packets marked "odd" and "even". The acknowledgments likewise say "odd" and "even". Suppose that the transmitter, having sent an odd packet, did not wait for an odd acknowledgment, and instead immediately sent the following even packet. It might then receive an acknowledgment saying "expecting an odd packet next". This would leave the transmitter in a quandary: has the receiver received both of the packets, or neither?
Go-Back-N
Go-Back-N ARQ is the sliding window protocol with wt>1, but a fixed wr=1. The receiver refuses to accept any packet but the next one in sequence. If a packet is lost in transit, following packets are ignored until the missing packet is retransmitted, a minimum loss of one round trip time. For this reason, it is inefficient on links that suffer frequent packet loss.
Ambiguity example
Suppose that we are using a 3-bit sequence number, as is typical for HDLC. This gives N = 2³ = 8. Since wr = 1, we must limit wt ≤ 7. This is because, after transmitting 7 packets, there are 8 possible results: anywhere from 0 to 7 of the packets could have been received successfully. This is 8 possibilities, and the transmitter needs enough information in the acknowledgment to distinguish them all.
If the transmitter sent 8 packets without waiting for acknowledgment, it could find itself in a quandary similar to the stop-and-wait case: does the acknowledgment mean that all 8 packets were received successfully, or none of them?
Selective repeat
The most general case of the sliding window protocol is Selective Repeat ARQ. This requires a much more capable receiver, which can accept packets with sequence numbers higher than the current nr and store them until the gap is filled in.
The advantage, however, is that it is not necessary to discard following correct data for one round-trip time before the transmitter can be informed that a retransmission is required. This is therefore preferred for links with low reliability and/or a high bandwidth-delay product.
The window size wr need only be larger than the number of consecutive lost packets that can be tolerated. Thus, small values are popular; wr=2 is common.
Ambiguity example
The extremely popular HDLC protocol uses a 3-bit sequence number, and has optional provision for selective repeat. However, if selective repeat is to be used, the requirement that wt+wr ≤ 8 must be maintained; if wr is increased to 2, wt must be decreased to 6.
Suppose that wr =2, but an unmodified transmitter is used with wt =7, as is typically used with the go-back-N variant of HDLC. Further suppose that the receiver begins with nr =ns =0.
Now suppose that the receiver sees the following series of packets (all modulo 8):
0 1 2 3 4 5 6 (pause) 0
Because wr =2, the receiver will accept and store the final packet 0 (thinking it is packet 8 in the series), while requesting a retransmission of packet 7. However, it is also possible that the transmitter failed to receive any acknowledgments and has retransmitted packet 0. In this latter case, the receiver would accept the wrong packet as packet 8.
The solution is for the transmitter to limit wt ≤6. With this restriction, the receiver knows that if all acknowledgments were lost, the transmitter would have stopped after packet 5. When it receives packet 6, the receiver can infer that the transmitter received the acknowledgment for packet 0 (the transmitter's na ≥1), and thus the following packet numbered 0 must be packet 8.
Extensions
There are many ways that the protocol can be extended:
The above examples assumed that packets are never reordered in transmission; they may be lost in transit (error detection makes corruption equivalent to loss), but will never appear out of order. The protocol can be extended to support packet reordering, as long as the distance can be bounded; the sequence number modulus N must be expanded by the maximum misordering distance.
It is possible to not acknowledge every packet, as long as an acknowledgment is sent eventually if there is a pause. For example, TCP normally acknowledges every second packet.
It is common to inform the transmitter immediately if a gap in the packet sequence is detected. HDLC has a special REJ (reject) packet for this.
The transmit and receive window sizes may be changed during communication, as long as their sum remains within the limit of N. Normally, they are each assigned maximum values that respect that limit, but the working value at any given time may be less than the maximum. In particular:
It is common to reduce the transmit window size to slow down transmission to match the link's speed, avoiding saturation or congestion.
One common simplification of selective repeat is so-called SREJ-REJ ARQ. This operates with wr = 2 and buffers packets following a gap, but allows only a single lost packet: while waiting for that packet, wr = 1, and if a second packet is lost, no more packets are buffered. This gives most of the performance benefit of the full selective-repeat protocol with a simpler implementation.
See also
Federal Standard 1037C
Compound TCP
Serial number arithmetic
TCP Fast Open
References
Comer, Douglas E. "Internetworking with TCP/IP, Volume 1: Principles, Protocols, and Architecture", Prentice Hall, 1995.
External links
RFC 1323 - TCP Extensions for High Performance
TCP window scaling and broken routers, 2004
Sliding Window Demo (Flash required)
Communication
Data transmission |
57396855 | https://en.wikipedia.org/wiki/WebAuthn | WebAuthn | Web Authentication (WebAuthn) is a web standard published by the World Wide Web Consortium (W3C). WebAuthn is a core component of the FIDO2 Project under the guidance of the FIDO Alliance. The goal of the project is to standardize an interface for authenticating users to web-based applications and services using public-key cryptography.
On the client side, support for WebAuthn can be implemented in a variety of ways. The underlying cryptographic operations are performed by an authenticator, which is an abstract functional model that is mostly agnostic with respect to how the key material is managed. This makes it possible to implement support for WebAuthn purely in software, making use of a processor's trusted execution environment or a Trusted Platform Module (TPM). Sensitive cryptographic operations can also be offloaded to a roaming hardware authenticator that can in turn be accessed via USB, Bluetooth Low Energy, or near-field communications (NFC). A roaming hardware authenticator conforms to the FIDO Client to Authenticator Protocol (CTAP), making WebAuthn effectively backward compatible with the FIDO Universal 2nd Factor (U2F) standard.
Similar to legacy U2F, Web Authentication is resilient to verifier impersonation, that is, it is resistant to active man-in-the-middle attacks, but unlike U2F, WebAuthn does not require a traditional password. Moreover, a roaming hardware authenticator is resistant to malware, since the private key material is at no time accessible to software running on the host machine.
The WebAuthn Level 1 standard was published as a W3C Recommendation on 4 March 2019. A Level 2 specification is under development. A Level 3 specification is currently a First Public Working Draft (FPWD).
Background
FIDO2 is the successor of the FIDO Universal 2nd Factor (U2F) legacy protocol. FIDO2 authentication has all the advantages of U2F—the primary difference is that a FIDO2 authenticator can also be a single multi-factor (passwordless) authenticator. U2F protocol is designed to act as a second factor to strengthen existing username/password-based login flows.
A FIDO2 authenticator may be used in either single-factor mode or multi-factor mode. In single-factor mode, the authenticator is activated by a test of user presence, which usually consists of a simple button push. In multi-factor mode, the authenticator (something you have) performs user verification. Depending on the authenticator capabilities, this can be:
something you know: a secret such as a PIN, passcode or swipe pattern
something you are: a biometric such as fingerprint, iris or voice
In any case, the authenticator performs user verification locally on the device. A secret or biometric stored on the authenticator is not shared with the website. Moreover, a single secret or biometric works with all websites, as the authenticator will select the correct cryptographic key material to use for the service requesting authentication after user verification was completed successfully.
A secret and biometric on the authenticator can be used together, similarly to how they would be used on a smartphone. For example, a fingerprint is used to provide convenient access to your smartphone but occasionally fingerprint access fails, in which case a PIN can be used.
Overview
Like its predecessor FIDO U2F, W3C Web Authentication (WebAuthn) involves a website, a web browser, and an authenticator:
The website is a conforming WebAuthn Relying Party
The browser is a conforming WebAuthn Client
The authenticator is a FIDO2 authenticator, that is, it is assumed to be compatible with the WebAuthn Client
WebAuthn specifies how a claimant demonstrates possession and control of a FIDO2 authenticator to a verifier called the WebAuthn Relying Party. The authentication process is mediated by an entity called the WebAuthn Client, which is little more than a conforming web browser.
Authentication
For the purposes of illustration, we assume the authenticator is a roaming hardware authenticator (see below for other options). In any case, the authenticator is a multi-factor cryptographic authenticator that uses public-key cryptography to sign an authentication assertion targeted at the WebAuthn Relying Party. Assuming the authenticator uses a PIN for user verification, the authenticator itself is something you have while the PIN is something you know.
To initiate the WebAuthn authentication flow, the WebAuthn Relying Party indicates its intentions to the WebAuthn Client (i.e., the browser) via JavaScript. The WebAuthn Client communicates with the authenticator using a JavaScript API implemented in the browser. A roaming authenticator conforms to the FIDO Client to Authenticator Protocol.
WebAuthn does not strictly require a roaming hardware authenticator. Alternatively, a software authenticator (implemented on a smartphone, e.g.) or a platform authenticator (i.e., an authenticator implemented directly on the WebAuthn Client Device) may be used. Relevant examples of platform authenticators include Windows Hello and the Android operating system.
The illustrated flow relies on PIN-based user verification, which, in terms of usability, is only a modest improvement over ordinary password authentication. In practice, the use of biometrics for user verification can improve the usability of WebAuthn. The logistics behind biometrics are still poorly understood, however. There is a lingering misunderstanding among users that biometric data is transmitted over the network in the same manner as passwords, which is not the case.
Registration
When the WebAuthn Relying Party receives the signed authentication assertion from the browser, the digital signature on the assertion is verified using a trusted public key for the user. How does the WebAuthn Relying Party obtain that trusted public key in the first place?
To obtain a public key for the user, the WebAuthn Relying Party initiates a WebAuthn registration flow that is very similar to the authentication flow illustrated above. The primary difference is that the authenticator now signs an attestation statement with its attestation private key. The signed attestation statement contains a copy of the public key that the WebAuthn Relying Party ultimately uses to verify a signed authentication assertion. The attestation statement also contains metadata describing the authenticator itself.
The digital signature on the attestation statement is verified with the trusted attestation public key for that particular model of authenticator. How the WebAuthn Relying Party obtains its store of trusted attestation public keys is unspecified. One option is to use the FIDO metadata service.
The attestation type specified in the JavaScript determines the trust model. For instance, an attestation type called self-attestation may be desired, for which the trust model is essentially trust on first use.
Support
The WebAuthn Level 1 standard was published as a W3C Recommendation by the Web Authentication Working Group on 4 March 2019. WebAuthn is supported by the following web browsers: Google Chrome, Mozilla Firefox, Microsoft Edge, Apple Safari and the Opera web browser.
The desktop version of Google Chrome has supported WebAuthn since version 67. Firefox, which had not fully supported the previous FIDO U2F standard, included and enabled WebAuthn in Firefox version 60, released on May 9, 2018. An early Windows Insider release of Microsoft Edge (Build 17682) implemented a version of WebAuthn that works with both Windows Hello as well as external security keys.
Existing FIDO U2F security keys are largely compatible with the WebAuthn standard, though WebAuthn added the ability to reference a unique per-account "user handle" identifier, which older authenticators are unable to store. One of the first FIDO2-compatible authenticators was the second-generation Security Key by Yubico, announced on April 10, 2018.
The first Security Level 2 certified FIDO2 key, called "Goldengate", was announced one year later by eWBM on April 8, 2019.
Dropbox announced support for WebAuthn logins (as a 2nd factor) on May 8, 2018.
Apple announced that Face ID or Touch ID could be used as a WebAuthn platform authenticator with Safari on June 24, 2020.
API
WebAuthn implements an extension of the W3C's more general Credential Management API, which is an attempt to formalize the interaction between websites and web browsers when exchanging user credentials. The Web Authentication API extends the Credential Management navigator.credentials.create() and navigator.credentials.get() JavaScript methods so they accept a publicKey parameter. The create() method is used for registering public key authenticators as part of associating them with user accounts (possibly at initial account creation time but more likely when adding a new security device to an existing account) while the get() method is used for authenticating (such as when logging in).
To check if a browser supports WebAuthn, scripts should check if the window.PublicKeyCredential interface is defined. In addition to PublicKeyCredential, the standard also defines the AuthenticatorResponse, AuthenticatorAttestationResponse, and AuthenticatorAssertionResponse interfaces in addition to a variety of dictionaries and other datatypes.
The API does not allow direct access to or manipulation of private keys, beyond requesting their initial creation.
Reception
In August 2018, Paragon Initiative Enterprises conducted a security audit of the WebAuthn standard. While they could not find any specific exploits, they revealed some serious weaknesses in the way the underlying cryptography is used and mandated by the standard.
The main points of criticism revolve around two potential issues that were problematic in other cryptographic systems in the past and therefore should be avoided in order to not fall victim to the same class of attacks:
Through the mandated use of COSE (RFC 8152), WebAuthn also supports RSA with PKCS1v1.5 padding. This particular padding scheme has been known for at least twenty years to be vulnerable to specific attacks, and it has been successfully attacked in other protocols and implementations of the RSA cryptosystem in the past. It is difficult to exploit under the given conditions in the context of WebAuthn, but given that there are more secure cryptographic primitives and padding schemes, this is still a bad choice and is no longer considered best practice among cryptographers.
The FIDO Alliance standardized on the asymmetric cryptographic scheme ECDAA. This is a version of direct anonymous attestation based on elliptic curves and in the case of WebAuthn is meant to be used to verify the integrity of authenticators, while also preserving the privacy of users, as it does not allow for global correlation of handles. However, ECDAA does not incorporate some of the lessons that were learned in the last decades of research in the area of elliptic curve cryptography, as the chosen curve has some security deficits inherent to this type of curve, which reduces the security guarantees quite substantially. Furthermore, the ECDAA standard involves random, non-deterministic, signatures, which already has been a problem in the past.
Paragon Initiative Enterprises also criticized how the standard was initially developed, as the proposal was not made public in advance and experienced cryptographers were not asked for suggestions and feedback. Hence the standard was not subject to broad cryptographic research from the academic world.
Despite these shortcomings, Paragon Initiative Enterprises still encourages users to continue to use WebAuthn, and has come up with some recommendations for potential implementors and developers of the standard that it hopes can be implemented before the standard is finalized. Avoiding such mistakes as early as possible would spare the industry the challenges introduced by broken standards and the resulting need for backwards compatibility.
ECDAA was only designed to be used in combination with device attestation. This particular feature of WebAuthn is not necessarily required for authentication to work. Current implementations allow the user to decide whether an attestation statement is sent during the registration ceremony. Independently, relying parties can choose to require attestation or not. ECDAA was removed from WebAuthn Level 2 as it was not implemented by browsers nor relying parties.
References
External links
Web Authentication: An API for accessing Public Key Credentials Level 1
Web Authentication Working Group
Web Authentication API on MDN
WebAuthn Awesome
Authentication methods
Identification
World Wide Web Consortium standards
Internet security
Web technology |
37839932 | https://en.wikipedia.org/wiki/DSPACE%20GmbH | DSPACE GmbH | dSPACE GmbH (digital signal processing and control engineering), located in Paderborn, Germany (North Rhine-Westphalia), is one of the world's leading providers of tools for developing electronic control units.
dSPACE GmbH has Project Centers in Pfaffenhofen (near Munich), Böblingen (near Stuttgart), and Wolfsburg, and cooperates with the autonomous local dSPACE companies situated in the US, UK, France, Japan, China, Korea and Croatia. Various distributors represent dSPACE in other overseas markets.
Application fields
dSPACE provides tools for developing, testing and calibrating electronic control units (ECUs) in the automotive, aerospace and medical engineering industries, as well as in industrial automation and mechatronics. In most cases, the process of developing and testing ECUs is based on the five phases of the V-cycle. dSPACE's hardware and software cover four of these five phases, but not the first phase, control design.
Control design
The control design phase involves developing the control algorithms that will run on an ECU, usually by modeling them graphically. This process can be performed with Simulink, modeling software from MathWorks, and is outside dSPACE's application fields.
Rapid control prototyping (RCP)
In rapid control prototyping, control algorithms are taken from a mathematical model and implemented as a real-time application so that the control strategies can be tested with the actual controlled system, such as a car or a robot. Simulink is used as the input and simulation tool, and Simulink Coder, also from MathWorks, is used as the code generator. dSPACE provides the necessary hardware platform consisting of a processor and interfaces for sensors and actuators, plus the Simulink blocks needed to integrate the interfaces into the Simulink model (Real-Time Interface, RTI).
Production code generation / ECU autocoding
In a development process based on mathematical models, the models are designed with graphical software, and then automatic production code generators are used to translate the models directly into code for ECUs/controllers. When a model's behavior has been validated, the code generator has to reliably transfer it to the target processor, whose resources are usually designed for the greatest possible cost-efficiency. In other words, the final production ECU generally has less memory and processing power than the RCP system on which the algorithm was developed and tested. As a result, the C code (production code) generated for the target processor has to meet stringent requirements regarding execution time and efficiency. Since 1999, dSPACE has marketed its production code generator TargetLink, which is integrated into Simulink, the environment for model-based development. In addition to performing the actual autocoding, including code generation for AUTOSAR software components, TargetLink also makes it possible for developers to compare the behavior of the generated code with that of the original Simulink model (by means of software-in-the-loop (SIL) and processor-in-the-loop (PIL) simulation).
Hardware-in-the-Loop (HIL)-Simulation
In HIL simulation, a simulator mimics the environment in which an ECU will function: a car, an airplane, a robot, etc. First the ECU's inputs and outputs are connected to the simulator's inputs and outputs. In the next step, the simulator executes a real-time model of the ECU's working environment, which can consist of Automotive Simulation Models (ASMs) from dSPACE or of models from other vendors. This method provides a way to test new functions reproducibly in a safe environment, before a prototype of the product has even been produced. As with rapid control prototyping, Simulink models are the foundation.
The advantage of HIL simulation in comparison with ECU tests in real prototype vehicles is that the tests on the control unit can be performed already during the development process. Errors are detected and eliminated very early and cost-efficiently.
Calibration / parameterization
Optimizing the control functions so that they fit specific applications is an integral part of ECU and controller development. To achieve this, the parameters of the ECUs are adjusted during ECU calibration. dSPACE offers software and hardware for this task.
Company history
1988: dSPACE is founded by Herbert Hanselmann and three other research associates at the Institute of Mechatronics at the University of Paderborn, Germany.
1991: First local dSPACE company outside Germany opens (dSPACE Inc.) Initially outside Detroit USA in Southfield, Michigan, relocated to Wixom in 2007.
2001: Local dSPACE companies are opened in France (dSPACE SARL, Paris) and the UK (dSPACE Ltd., Cambridge); and a second Project Center is opened (near Stuttgart)
2006: The local dSPACE company in Japan is opened (dSPACE K.K.). Initially in Yokohama, relocated to Tokyo in 2007.
2008: The company's 20th anniversary. The local dSPACE company in China (dSPACE Mechatronic Control Technology (Shanghai) Co., Ltd.) is founded, and Herbert Hanselmann receives the "Entrepreneur Of The Year 2008" award
2010: dSPACE GmbH relocates to the new campus in Paderborn, Germany.
2018: The local dSPACE company in Croatia is opened (dSPACE Engineering d.o.o.) in Zagreb.
History of dSPACE products
1988: First real-time development system for control technology/mechatronics, based on a digital signal processor
1989: First hardware-in-the-loop (HIL) simulator is shipped
1990: First real-time development system with a floating-point processor is shipped
1992: RTI, first real-time system connected to MATLAB/Simulink
1994: First multiprocessor hardware for real-time development systems
1995: First turnkey (HIL) simulator for an ABS/ESP test bench
1999: MicroAutoBox, a complete prototyping system for in-vehicle use
1999: TargetLink, the first production code generator for ECUs based on MATLAB/Simulink
2003: CalDesk, a component of the dSPACE calibration system
2005: RapidPro, a modular system for signal conditioning and power stages
2005: Automotive Simulation Models (ASMs), real-time automotive simulation models based on MATLAB/Simulink
2007: SystemDesk, tool for developing complex ECU software architectures based on the AUTOSAR concept
2010: MicroAutoBox II, second generation of the vehicle-capable prototyping systems
2011: SCALEXIO, the new hardware-in-the-loop system, including new ConfigurationDesk configuration software
2012: VEOS, PC-based simulation platform for early validation of ECU software
2015: MicroLabBox: Compact prototyping unit for the laboratory
External links
References
Software companies of Germany
Electronics companies of Germany
Paderborn |
24977717 | https://en.wikipedia.org/wiki/Penelope%20Garcia | Penelope Garcia | Penelope Grace Garcia is a fictional character on the CBS crime dramas Criminal Minds and its short-lived spin-off Criminal Minds: Suspect Behavior, portrayed by Kirsten Vangsness. She is the technical analyst of the Behavioral Analysis Unit that is the center of both shows. She has also made multiple guest appearances on Criminal Minds: Beyond Borders, making her the only character in the franchise to appear in all three of its series.
Life before the BAU
Garcia is from San Francisco, California. A drunk driver killed her parents in a car accident when she was eighteen, and she now helps counsel the families of murder victims in her spare time. Garcia has stated that after her parents died, she dropped out of Caltech and went "underground" but continued to teach herself computer coding. She had been placed on one of the FBI's hacker lists (she was one of a small handful of extremely useful or dangerous hackers in the world), and they recruited her from there. It has also been mentioned, when she was not allowed to travel with the team to Langley, that she was on the CIA's "lists" as well. Fellow BAU agent Jennifer Jareau (A.J. Cook) joked that Garcia belonged to that list when she (successfully) tried to hack the CIA for information (namely, Prince William's telephone number) and information on Diana, Princess of Wales' death and other government conspiracies.
Penelope is into online games, specifically MMOGs, as she was once seen playing a game about Camelot on the BAU network, constantly virtually meeting with "Sir Kneighf", an online alter ego who turns out to be serial killer Randall Garner (Charles Haid), who is keeping a young woman prisoner while sending the team several clues that, with tremendous help from agent Spencer Reid (Matthew Gray Gubler), they use to catch him and save the woman. Garner hacked into Garcia's computer and accessed files about the BAU, then used the personal information to find out their whereabouts so he could send the clues there. At the end of the Season 6 episode "Compromising Positions", Supervisory Agent Aaron Hotchner (Thomas Gibson) comments that Garcia submitted her handwritten resume on pink stationery when she applied for a job at the FBI.
The Season 9 episode "The Black Queen" reveals that Garcia was arrested by the FBI in San Jose, California after she hacked into the computer systems of a cosmetics company, prompted by outrage over its use of animals in product testing. Hotchner offered her a choice of facing prosecution or joining the BAU to help track down serial killers; when she reacted with disdain, he noted her scrupulous morality in choosing targets. Penelope accepted the job and hand-wrote a résumé for Hotchner to give to the FBI's human resources department, using the aforementioned pink stationery that she had in her purse.
Criminal Minds
Garcia is unabashedly emotional, which sometimes makes her job with the BAU more difficult. She has broken down, crying several times while examining the brutal acts of violence that the agency deals with on a daily basis. However, according to Hotchner, she "fills her office with figurines and color to remind herself to smile as the horror fills her screens". Garcia is, on the whole, an optimist. She has managed to remain so, even though the job occasionally requires her to dig into people's secret lives, to "find the god-awful thing that happened to them that made them do the god-awful thing to someone else". Many team members have commented in various ways, that her optimism is an aid to them, that (as Hotchner says) they would never want her to change.
In the episode "Lucky", Garcia is shot by Jason Clark Battle (Bailey Chase), with whom she had been on a date. Garcia survives, but Battle, who turns out to be a serial killer obsessed with becoming a celebrity, continues to stalk her in order to become infamous as the FBI's "archnemesis". When he infiltrates the BAU's headquarters, she alerts the rest of the team, which results in Battle taking hostages; Jareau kills him in order to protect the rest of the team After this incident, Morgan insists that Garcia keep a gun; however, it is never shown whether she took this advice.
Garcia seems to have been the one most hurt by Jareau transferring out of the Behavioral Analysis Unit. In the Season 6 episode "Compromising Positions", Garcia volunteered to replace Jareau as their new Media Liaison, but soon discovered that she was not suited for the job. She returned to her analyst duties after the case was closed, but was given the option to travel to crime scenes with the team as needed.
She is devastated when Agent Emily Prentiss (Paget Brewster) apparently dies in the episode "Lauren" after being stabbed by her nemesis, Ian Doyle (Timothy V. Murphy). In "Hanley Waters", she is interviewed by Hotchner about Prentiss' "death" and says she wants to talk about the times when Prentiss made her happy instead of about her being gone.
Garcia is extremely and emotionally excited when she learns that Prentiss is alive, having faked her death to go into witness protection. She is also quick to forgive her - along with Jareau and Hotchner, who knew the truth - for the deception. It is also revealed that she has been taking care of Prentiss' cat, Sergio, and when Emily inquires about him, Garcia promptly demands visitation rights.
Garcia is afraid of losing loved ones, as she risked her career by taking down a federal website to stop her boyfriend, Kevin Lynch (Nicholas Brendon), from being transferred out of country for a job. Details about why her parents died in a car accident are revealed in season 7 when Garcia is overseeing a support group and sharing her loss, telling the group they died looking for Garcia after she didn't return home by curfew.
Her moment of truth came in the 9th season episode "Demons" when she was forced to shoot an assassin sent to kill Reid by a corrupt sheriff's deputy. Reid had been in the hospital after being shot in the previous episode. In an attempt to get some closure, she communicates with the man she shot while he's on death row and later goes to his execution.
Garcia is especially close to Jareau, Prentiss, and Derek Morgan (Shemar Moore). She and the latter are best friends, and frequently engage in mock-flirtatious banter while working together on cases. Garcia is godmother of both Jareau's sons and Morgan's son as well. After Morgan's departure and the addition of Luke Alvez (Adam Rodriguez) to the BAU, Garcia becomes uncharacteristically distant and withdrawn from him, frequently calling him 'Newbie.' Later, Alvez and she find some common ground after she discovers that Roxie, who she assumed was Luke's girlfriend, is actually his dog. In the season twelve finale, Morgan returns after serial killer Peter Lewis (Bodhi Elfman) tried to coax out the team and asked Garcia to cut him some slack, despite her reluctance.
In Season 11, Garcia is forced to go into protective custody at the FBI Academy in Quantico, Virginia after the team finds out that a hitman group is targeting her for investigating them. Garcia makes this connection after the team arrests a member of the group who explains their target is "The Dirty Dozen", which Garcia explains refers to the fact she uses twelve botnets to conduct online investigations. The team eventually captures the remaining members of the group during a sting operation in "Entropy".
In season 13, Garcia ends up going to the field with Agent Matt Simmons (Daniel Henney) to track down Lewis. She also showed that she had not yet been able to overcome the trauma from her shooting. Morgan ended up coming back to give her emotional support.
In the episode "All You Can Eat" she returns to California to testify at the parole hearing of the man who killed her parents. At first, she wants to make sure he stays in prison, but after a particularly emotional case, she decides to forgive him and does not oppose his release.
In the episode "Saturday", during a case involving one of her students at a hacker workshop at BAU headquarters, she reveals to Alvez that she has a Russian stalker.
In episode 9 of season 15, she reveals that she is considering accepting a job offer at an independent institution. In the final episode, the team celebrates Garcia's departure and Alvez invites her to dinner during her team's farewell party.
Criminal Minds: Suspect Behavior
Garcia also appeared in the spin-off Criminal Minds: Suspect Behavior as a series regular.
Criminal Minds: Beyond Borders
Garcia also appeared in the spin-off Criminal Minds: Beyond Borders. Her first appearance was in season one in a scene with Russ "Monty" Montgomery (Tyler James Williams), who playfully hides Garcia's octopus mug. In July 2016, it was announced that Garcia would make several appearances in season 2.
References
External links
Criminal Minds characters
Fictional Federal Bureau of Investigation personnel
Fictional hackers
Television characters introduced in 2005
Crossover characters in television
Fictional California Institute of Technology people
Fictional characters from San Francisco
American female characters in television
Fictional orphans
5856098 | https://en.wikipedia.org/wiki/Irish%20Free%20Software%20Organisation | Irish Free Software Organisation | The Irish Free Software Organisation (or IFSO) is a member organisation based in the Republic of Ireland which works to promote the use of free software in Ireland, and oppose legal or political developments which would interfere with the use or development of Free Software.
It is an associate organization of Free Software Foundation Europe (FSFE), with which it continues to maintain close ties.
History
IFSO was founded in January 2004 with the aims of promoting and protecting the freedom to study, modify and redistribute Free Software.
IFSO was founded as an extension of work on the EU Software Patents directive being performed by an ad hoc group, who perceived a threat to the Free Software community from that legislation. The organisation was intended to foster the Free Software community in Ireland, and to continue this legal and political work in a coherent manner.
Activities
IFSO has been involved in organising several public lectures on Free Software, Software Patentability and other related topics.
It has lobbied on the subject of the EU Software Patents directive, and other elements of European and Irish legislation.
It has worked to raise awareness and promote the use of Free Software (for example, by participating in Software Freedom Day)
Structure
Although IFSO is a membership organisation, with a committee to provide structure, formal membership is considered less important than an individual's willingness to participate and take initiative. IFSO frequently collaborates with related organisations.
External links
The IFSO website
IFSO project: software patents
Transcript from an IFSO event "Preventing Software Patents: How and Why"
IFSO project: GPLv3
IFSO project: promotion
Free Software Foundation Europe
Slashdot story on two GPLv3 talk transcripts published by IFSO
Free and open-source software organizations
2004 establishments in Ireland
Organizations established in 2004
Political organisations based in the Republic of Ireland |
22626822 | https://en.wikipedia.org/wiki/PC/TCP%20Packet%20Driver | PC/TCP Packet Driver | PC/TCP Packet Driver is a networking API for MS-DOS, PC DOS, and later x86 DOS implementations such as DR-DOS, FreeDOS, etc. It implements the lowest levels of a TCP/IP stack, where the remainder is typically implemented either by TSR drivers or as a library linked into an application program. It was invented in 1983 at MIT's Lab for Computer Science (CSR/CSC group under Jerry Saltzer and David D. Clark), and was commercialized in 1986 by FTP Software.
A packet driver uses an x86 software interrupt number (INT) in the range 60h to 80h. The number used is detected at runtime; it is most commonly 60h, but may be changed to avoid application programs that use fixed interrupts for internal communications. The interrupt vector is used as a pointer (4 bytes, little-endian) to the address of a possible interrupt handler. If the text string "PKT DRVR" is found within the first 12 bytes immediately following the entry point, then a driver has been located.
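A search for an installed packet driver can therefore be performed by walking the interrupt vectors and looking for the signature string. The following is a rough sketch for a 16-bit real-mode DOS compiler (for example Borland or Open Watcom C, whose dos.h provides getvect() and whose far-pointer routines include _fmemcmp()); it is illustrative only and relies on compiler-specific extensions that are not part of standard C:

 #include <dos.h>       /* getvect(); 16-bit DOS compilers only               */
 #include <string.h>    /* _fmemcmp() for far pointers; compiler-specific     */

 /* Scan the packet-driver interrupt range for the "PKT DRVR" signature
  * within the first 12 bytes after the handler entry point.
  * Returns the interrupt number, or -1 if no packet driver is found. */
 int find_packet_driver(void)
 {
     int intno, off;

     for (intno = 0x60; intno <= 0x80; intno++) {
         const char far *handler = (const char far *)getvect(intno);
         if (handler == 0)
             continue;
         for (off = 0; off + 8 <= 12; off++) {
             if (_fmemcmp(handler + off, "PKT DRVR", 8) == 0)
                 return intno;
         }
     }
     return -1;
 }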
Packet drivers can implement many different network interfaces, including Ethernet, Token Ring, RS-232, Arcnet, and X.25.
Functions
Drivers
WinPKT is a driver that enables the use of packet drivers under Microsoft Windows, which moves applications around in memory.
W3C507 is a DLL to packet driver for the Microsoft Windows environment.
Support is provided for an Ethernet-like network interface over serial links (using the 8250 UART), as well as for CSLIP, IPX, Token Ring, LocalTalk, and ARCNET.
See also
Crynwr Collection - alternative free packet driver collection
Network Driver Interface Specification (NDIS) - developed by Microsoft and 3Com, free wrappers
Open Data-Link Interface (ODI) - developed by Apple and Novell
Universal Network Device Interface (UNDI) - used by Intel PXE
Uniform Driver Interface (UDI) - defunct
Preboot Execution Environment - network boot by Intel, widespread
References
Computer networks
Device drivers |
17242567 | https://en.wikipedia.org/wiki/Kernel%20marker | Kernel marker | Kernel markers were a static kernel instrumentation support mechanism for Linux kernel source code, allowing special tools such as LTTng or SystemTap to trace information exposed by these probe points. Kernel markers were declared in the kernel code by one-liners of the form:
trace_mark(name, format_string, ...);
Where name is the marker's unique name, and format_string describes the remaining arguments' types.
A marker can be on or off depending on whether a probe is connected to it or not. Code which wants to hook into a trace point first calls:
int marker_probe_register(const char *name, const char *format_string, marker_probe_func *probe, void *pdata);
to register its probe callback with the marker point (pdata is a private data value that the code wants to pass to the probe). Later, the probe is turned on and off using:
int marker_arm(const char *name);
int marker_disarm(const char *name);
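As an illustration only, a tracer built as a kernel module might attach a probe to a marker as follows. The marker name "subsys_event", its format string and the probe name are made up for the example, and the exact marker_probe_func prototype must be taken from <linux/marker.h> of the kernel version in use, since it changed between releases:

 #include <linux/module.h>
 #include <linux/marker.h>

 /* Probe defined elsewhere; its parameter list is given by the
  * marker_probe_func typedef in <linux/marker.h> and receives the
  * private data, the format string and the traced arguments. */
 extern marker_probe_func my_probe;

 static int __init my_tracer_init(void)
 {
     int ret;

     ret = marker_probe_register("subsys_event", "value %d", my_probe, NULL);
     if (ret)
         return ret;
     return marker_arm("subsys_event");
 }

 static void __exit my_tracer_exit(void)
 {
     marker_disarm("subsys_event");
     /* A matching marker_probe_unregister() was also provided by the API. */
 }

 module_init(my_tracer_init);
 module_exit(my_tracer_exit);
 MODULE_LICENSE("GPL");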
Using markers has a negligible overhead thanks in part to Immediate Values, another support mechanism that embeds switches in the code that can be dynamically turned on and off, without using a memory reference and thus saving cache lines.
The initial motivation to create this static instrumentation infrastructure was the large performance overhead induced by the predating dynamic instrumentation mechanism Kprobe mechanism, which depends on breakpoints. Static instrumentation can also more easily survive source code changes because the markers are in the source code.
Kernel Markers consisted essentially of a C preprocessing macro which added, in the instrumented function, a branch over a function call. By doing so, neither the stack setup nor the function call is executed when instrumentation is not enabled. By identifying the branch executing the stack setup and function call as unlikely (using the GCC built-in __builtin_expect()), a hint is given to the compiler to position the tracing instructions away from cache lines involved in standard kernel execution.
Two Kernel Markers drawbacks were identified which led to its replacement by Tracepoints:
Type verification was limited to scalar types because the API is based on format strings. This could be problematic if pointers must be dereferenced by the tracer code.
The Markers "hide" the instrumentation in the source code, keeping no global registry of the instrumentation. This makes namespace conventions and tracking of instrumentation modification difficult unless the whole kernel tree is monitored.
A patch-set implementing them was merged into version 2.6.24, which was released on January 24, 2008. To address issues regarding kernel markers, Mathieu Desnoyers, their original author, implemented a simpler and more type-safe version of static probe points named Tracepoints. A patch-set implementing Tracepoints was merged into version 2.6.28, which was released on December 25, 2008. Starting then, kernel markers were slowly removed from kernel sources and eventually fully removed in Linux kernel 2.6.32, which was released on December 3, 2009.
See also
Kernel debugger
References
External links
Jonathan Corbet, Kernel markers, LWN.net, 2007
Mathieu Desnoyers, Using the Linux Kernel Markers, Linux kernel documentation, 2008
Jonathan Corbet, Tracing: no shortage of options, LWN.net, 2008
Linux kernel features |
2994624 | https://en.wikipedia.org/wiki/Atari%20MEGA%20STE | Atari MEGA STE | The Atari Mega STE was Atari Corporation's final personal computer in the Atari ST series. Released in 1991, the MEGA STE is a late-model Motorola 68000-based STE mounted in the case of an Atari TT computer.
Description
The MEGA STE is based on STE hardware. The 2 MB and 4 MB models shipped with a high-resolution monochrome monitor and an internal SCSI hard disk (the 1 MB model included no monitor, hard disk, or hard disk controller). While offering better ST compatibility than the TT, it also included a number of TT features: the ST-grey version of the TT case with separate keyboard and system unit, an optional FPU, a VMEbus slot, two extra RS-232 ports (all 9-pin rather than 25-pin as on previous models), a LocalTalk/RS-422 port (although no AppleTalk software was ever produced) and support for 1.44 MB HD floppy disks. Support for a third/middle mouse button was included, too.
A unique feature of the MEGA STE in relation to previous Atari systems is the software-switchable CPU speed, which allows the CPU to operate at 16 MHz for faster processing or 8 MHz for better compatibility with old software. An upgrade to the operating system was also produced after the first units were shipped that upgraded the onboard ROMs to TOS 2.05 and later to 2.6/2.06.
The VME bus provided expansion capability using cards that enhanced the computer's capabilities such as enhanced graphics processing capability and Ethernet network connectivity.
Technical specifications
CPU: Motorola 68000 @ 8 or 16 MHz with 16kB cache
FPU: Motorola 68881 or Motorola 68882
BLiTTER - graphics co-processor chip
RAM: 1, 2 or 4 MB ST RAM expandable to 4 MB using 30-pin SIMMs
Sound: Yamaha YM2149 + enhanced sound chip same as in Atari STe
Drive: 720 KB (first MEGA STE version) or 1.44 MB (later version) 3½" floppy disk drive
Ports: MIDI In/Out, 3 x RS-232, "Serial LAN" LocalTalk/RS-422, printer, monitor (RGB and Mono), RF modulator, extra disk drive port, ACSI, SCSI (ACSI/SCSI daughterboard), port, VMEbus inside case, detachable keyboard, joystick and mouse ports on keyboard
Operating System: TOS (The Operating System) with the Graphics Environment Manager (GEM) graphical user interface (GUI). TOS versions: 2.05 in ROM or 2.06 in ROM
Display modes: 320×200 (16 out of 4096 colors), 640×200 (4 out of 4096 colors), 640×400 (mono)
Character set: Atari ST character set (based on code page 437)
Case: Two-piece slimdesktop-style.
References
External links
Web page of Guillaume Tello What to do with a Mega STE?
Programs for the ST/TT family and technical articles (in French)
Atari MegaSTe Memory Cache
The MEGA STe review, 1992
68000-based home computers
Atari ST
Products introduced in 1991 |
23057675 | https://en.wikipedia.org/wiki/143rd%20Sustainment%20Command%20%28Expeditionary%29 | 143rd Sustainment Command (Expeditionary) | The 143rd Sustainment Command (Expeditionary)(formerly: 143rd Transportation Command), is one of seven general officer sustainment commands in the United States Army Reserve. It has command and control of more than 10,000 Army Reserve Soldiers throughout the southeastern United States in Alabama, Florida, Georgia, Louisiana, North Carolina, South Carolina, Tennessee, Arkansas and Mississippi. It is made up of more than 100 Army Reserve units whose missions are diverse and logistical in nature. The mission of the 143rd ESC is to provide command and control of sustainment forces and to conduct sustainment, deployment, redeployment and retrograde operations in support of U.S. and multinational forces. The mission of the 143rd when not deployed is to ensure readiness of the soldiers under its command and control.
The ESC is a peacetime subordinate to the 377th Theater Sustainment Command.
History
The 143rd Sustainment Command (Expeditionary) [referred to as an ESC] was originally constituted as the 143rd Transportation Brigade 24 November 1967 in the Army Reserve and activated 2 January 1968 in Orlando, Florida. It was reorganized and redesignated 16 October 1985 as the 143d Transportation Command. From 2003 to 2007, the 143d Transportation Command maintained a continuous presence in Southwest Asia in support of US Military Units engaged in Operations ENDURING FREEDOM and IRAQI FREEDOM. In a ceremony 17 September 2007, the 143rd Transportation Command cased its command colors for the last time signifying the end of the unit's era as a major transportation command headquarters. Immediately following, the new 143rd ESC Commanding General, Brigadier General Daniel I. Schultz, uncased the 143rd ESC colors, signifying the standup of this new logistics headquarters and the start of a new era for the 143rd.
Six months after the transition ceremony the 143rd ESC received a Department of the Army warning order for mobilization and deployment of the 143rd headquarters. Since receipt of the warning order, the 143rd ESC prepared for deployment by completing various Soldier readiness activities including soldier readiness processing, a sustainment training exercise conducted at Ft. Lee, Virginia and warrior training at the Regional Training Center, Ft. Hunter Liggett, California.
On 9 January 2009, the 143rd ESC deployed in support of the troop buildup in Afghanistan for Operation Enduring Freedom. The 143rd's deployment is the first time an ESC has deployed to Afghanistan. The mission of the 143d ESC during this deployment is to provide command and control of assigned forces, and to conduct sustainment, deployment, redeployment and retrograde operations in support of U.S. and multinational forces in the U.S. Central Command area of operations. In December 2009 the 143rd ESC turned over command of the Joint Sustainment Command-Afghanistan to the 135th Sustainment Command (Expeditionary).
In June 2013, the 143rd ESC once again mobilized in support of Operation Enduring Freedom and deployed 265 Soldiers to Kuwait and Afghanistan in support of the 1st Theater Sustainment Command and operations in the US Central Command area of operations. The unit assumed responsibility for operational sustainment in the ARCENT AOR in October 2013 from the 135th Sustainment Command (Expeditionary), and served as the senior operational sustainment headquarters in Kuwait until May 2014, when the unit redeployed, having transferred responsibility for operational sustainment to the 1st Sustainment Command (Theater).
Subordinate units
207th Regional Support Group, Fort Jackson, South Carolina
362nd Quartermaster Battalion (PETRL SUP), Winterville, North Carolina
828th Transportation Battalion (MOTOR), Livingston, Alabama
518th Sustainment Brigade, Knightdale, North Carolina
812th Transportation Battalion, Charlotte, North Carolina
641st Regional Support Group, St. Petersburg, Florida
257th Transportation Battalion (MVT CTL), Gainesville, Florida
332nd Transportation Battalion (TML), Tampa, Florida
642nd Regional Support Group, Decatur, Georgia
352nd Combat Sustainment Support Battalion, Macon, Georgia
787th Combat Sustainment Support Battalion, Dothan, Alabama
461st Human Resources Command, Decatur, Georgia
Lineage
Constituted 24 November 1967 in the Army Reserve as Headquarters and Headquarters Company, 143d Transportation Brigade.
Activated 2 January 1968 at Orlando, Florida
Reorganized and redesignated 16 October 1985 as Headquarters and Headquarters Company, 143d Transportation Command
(Elements ordered into active military service 2003–2007 in support of the War on Terrorism)
Converted, reorganized, and redesignated 17 September 2007 as Headquarters and Headquarters Company, 143d Sustainment Command
Ordered into active military service 9 January 2009 at Orlando, Florida; released from active military service 12 February 2010 and reverted to reserve status
Ordered into active military service 14 June 2013 at Orlando, Florida; released from active military service 15 June 2014 and reverted to reserve status
Unit insignia
Shoulder sleeve insignia (SSI)
Description
On a brick red upright rectangle with a brick red border in height and in width overall, two golden yellow ribbands lined white with an arrowhead at each end interlaced and reversed at a 90-degree angle, fimbriated brick red.
Symbolism
Brick red and golden yellow are the colors used for Transportation units, the previous designation of the unit. The interlacing represents a strong support and simulates roads and viaducts, suggesting travel. The arrowheads denote leadership and a determined direction.
Background
The shoulder sleeve insignia was originally approved 24 October 1968 for the 143d Transportation Brigade. It was redesignated for the 143d Transportation Command on 16 October 1985, and amended to revise the description and symbolism. The insignia was redesignated effective 17 September 2007, for the 143d Sustainment Command with the description and symbolism updated.
Distinctive unit insignia (DUI)
Description
A gold color metal and enamel device in height overall consisting of an upright winged gold arrow with wings down, surmounted by a brick red annulet inscribed in the upper arc, "MOVEMENT" and on the lower "BRINGS VICTORY" in gold letters, the area within the annulet green.
Symbolism
Brick red and golden yellow (gold) are the colors used for Transportation, the previous designation of the unit and green is basic for "all traffic forward." The annulet simulates both a wheel, alluding to motor transport, and an enclosure, symbolizing a terminal. The arrow, a sign of direction, denotes controlled determination, and is used to represent the implements and armaments of warfare, while the wings relate to the unit's air transport aspects and symbolizes the speed in the organization's operations.
Background
The distinctive unit insignia was originally approved for the 143d Transportation Brigade on 13 January 1969. It was redesignated for the 143d Transportation Command on 16 October 1985 and amended to revise the description. The insignia was redesignated effective 17 September 2007, for the 143d Sustainment Command with the description and symbolism updated.
Unit honors
Meritorious Unit Commendation, Streamer Embroidered SOUTHWEST ASIA 2004-2005
Meritorious Unit Commendation, Streamer Embroidered SOUTHWEST ASIA 2009-2010
Meritorious Unit Commendation, Streamer Embroidered SOUTHWEST ASIA 2013-2014
References
External links
143rd Sustainment Command (Expeditionary) home page
Global Security: 143rd TRANSCOM
Military units and formations of the United States Army Reserve
143
Orlando International Airport |
65407767 | https://en.wikipedia.org/wiki/Dinosaurs%20in%20Jurassic%20Park | Dinosaurs in Jurassic Park | Jurassic Park, later known as Jurassic World, is an American science fiction adventure media franchise. It focuses on the cloning of dinosaurs through ancient DNA, extracted from mosquitoes that have been fossilized in amber. The franchise explores the ethics of cloning and genetic engineering, and the morals behind de-extinction.
The franchise began in 1990, with the release of Michael Crichton's novel Jurassic Park. A film adaptation, also titled Jurassic Park, was directed by Steven Spielberg and was released in 1993. Crichton then wrote a sequel novel, The Lost World (1995), and Spielberg directed its film adaptation, The Lost World: Jurassic Park (1997). Subsequent films have been released, including Jurassic Park III in 2001, completing the original trilogy of films. A fourth installment, Jurassic World, was released in 2015, marking the beginning of a new trilogy. Its sequel, Jurassic World: Fallen Kingdom, was released in 2018. A sixth film, Jurassic World Dominion, is scheduled for release in 2022, marking the conclusion of the second trilogy. Two Jurassic World short films have also been released: Battle at Big Rock (2019) and a Jurassic World Dominion prologue (2021).
Theropod dinosaurs like Tyrannosaurus rex and Velociraptor have had major roles throughout the film series. Other species, including Brachiosaurus and Spinosaurus, have also played significant roles. The series has also featured other creatures such as Mosasaurus and members of the pterosaur group, both commonly misidentified by the public as dinosaurs. The various creatures in the films were created through a combination of animatronics and computer-generated imagery (CGI). For the first three films, the animatronics were created by special-effects artist Stan Winston and his team, while Industrial Light & Magic (ILM) handled the CGI for all the films. The first film garnered critical acclaim for its innovations in CGI technology and animatronics. Since Winston's death in 2008, the practical dinosaurs have been created by other artists, including Legacy Effects and Image Engine (Jurassic World), Neal Scanlan (Jurassic World: Fallen Kingdom), and John Nolan (Jurassic World Dominion).
Paleontologist Jack Horner has served as the longtime scientific advisor on the films, and paleontologist Steve Brusatte was also consulted for Jurassic World Dominion. The original film was praised for its modern portrayal of dinosaurs. Horner said that it still contained many inaccuracies, but noted that it was not meant as a documentary. Later films in the series contain inaccuracies as well, for entertainment purposes. This includes the films' velociraptors, which are depicted as being larger than their real-life counterparts. In addition, the franchise's method for cloning dinosaurs has been deemed scientifically implausible, for a number of reasons.
On-screen portrayals
The various creatures in the Jurassic Park and Jurassic World films were created through a combination of animatronics and computer-generated imagery (CGI). For each of the films, Industrial Light & Magic (ILM) has handled dinosaur scenes that required CGI. Throughout the film series, ILM has studied large animals such as elephants and rhinos, for reference in designing the digital dinosaurs.
Jurassic Park series
For the original 1993 film Jurassic Park, director Steven Spielberg wanted to use practical dinosaurs as much as possible. He chose special-effects artist Stan Winston to create animatronic dinosaurs for the film, after seeing his work on the Queen Alien in the 1986 film Aliens. Winston said the Queen was easy compared to a dinosaur animatronic: "The queen was exoskeletal, so all of its surfaces were hard. There were no muscles, no flesh, and there was no real weight to it. The alien queen also didn't have to look like a real, organic animal because it was a fictional character -- so there was nothing in real life to compare it to. There was just no comparison in the difficulty level of building that alien queen and building a full-size dinosaur." Winston's team spent much time perfecting the animatronics, which used metal skeletons powered by electric motors. They molded latex skin that was then fitted over the robotic models, forming the exterior appearance. Up to 20 puppeteers were required to operate some of the dinosaurs. After filming concluded, most of the animatronics were disassembled.
For certain scenes, Spielberg had considered using go motion dinosaurs created by visual-effects artist Phil Tippett. Spielberg was disappointed with the results and opted for ILM's digital dinosaurs instead, although Tippett and his team of animators remained with the project to supervise the dinosaur movements. Tippett and ILM worked together to create the Dinosaur Input Device (DID), a robot shaped like a dinosaur skeleton. The DID included an array of sensors that captured various poses, which were then transferred into graphics software at ILM. Animatics and storyboards by Tippett were also used by the film crew as reference for action sequences. ILM based their CGI dinosaurs on Winston's models. Herds of dinosaurs were created through computer animation, using duplicate individuals which were slightly altered to give the illusion of multiple animals. The 127-minute film has 15 minutes of total screen time for the dinosaurs, including nine minutes of animatronics and six minutes of CGI animals. The film received critical acclaim for its innovations in CGI technology and animatronics. Among adults, the film generated an interest in dinosaurs, and it increased interest in the field of paleontology.
Winston and his team returned for the 1997 sequel, The Lost World: Jurassic Park, although the film relied more on CGI by ILM. The film features 75 computer-generated shots. While the first film showed that dinosaurs could be adequately recreated through special effects, the sequel raised the question of what could be done with the dinosaurs. Winston said, "I wanted to show the world what they didn't see in 'Jurassic Park': more dinosaurs and more dinosaur action. 'More, bigger, better' was our motto." Technology had not advanced much since the first film, although director Spielberg said that "the artistry of the creative computer people" had advanced: "There's better detail, much better lighting, better muscle tone and movement in the animals. When a dinosaur transfers weight from his left side to his right, the whole movement of fat and sinew is smoother, more physiologically correct." Besides animatronics, Winston's team also painted maquettes of dinosaurs that would subsequently be created through CGI.
Spielberg served as executive producer for each subsequent film. ILM and Winston returned for the 2001 film Jurassic Park III, directed by Joe Johnston. Winston's animatronics were more advanced than those used in previous films; they included the ability to blink, adding to the sense of realism. Animatronics were used for close-up shots. Winston's team took approximately 13 months to design and create the practical dinosaurs. The team also created dinosaur sculptures, which were then scanned by ILM to create the computer-generated versions of the animals.
Jurassic World series
Winston planned to return for a fourth film, which was ultimately released in 2015 as Jurassic World. He had been planning more-advanced special effects for the project, but died in 2008 before the start of filming. Legacy Effects, founded by former members of Stan Winston Studios, provided an animatronic dinosaur for Jurassic World. Otherwise, the film's creatures were largely created through CGI, provided by ILM and Image Engine. New technology, such as subsurface scattering, allowed for greater detail in the creatures' skin and muscle tissue. According to Jurassic World director Colin Trevorrow, the film's animals were created from scratch because "technology has changed so much that everything is a rebuild". Some of the computer-generated creatures were created with motion capture, using human actors to perform the animals' movements. Jurassic World was the first film in the series to use motion capture technology for its dinosaurs.
ILM returned for the 2018 sequel, Jurassic World: Fallen Kingdom, which features animatronics by special-effects artist Neal Scanlan. The film features more dinosaur species than any of its predecessors, including several not seen in earlier installments. It also features more animatronic dinosaurs than any previous sequel, and the animatronics used were more advanced than those in earlier films. Fallen Kingdom director J. A. Bayona said animatronics "are very helpful on set, especially for the actors so they have something to perform against. There's an extra excitement if they can act in front of something real."
Five animatronic dinosaurs were created for Fallen Kingdom, which features close interaction between humans and dinosaurs. Scanlan and his team of 35 people spent more than eight months working on the dinosaurs. Scanlan said animatronics were not best for every scene: "In some ways it will have an impact on your shooting schedule; you have to take time to film with an animatronic. In the balance, we ask ourselves if it is economically and artistically more valuable to do it that way, or as a post-production effect." Unlike the previous film, ILM determined that motion capture technology would not be adequate for depicting the film's dinosaurs.
The 2019 Jurassic World short film, Battle at Big Rock, utilized CGI and reference maquettes by ILM, and an animatronic by Legacy Effects.
The 2022 film Jurassic World Dominion used more animatronics than the previous Jurassic World films. Approximately 18 animatronics of varying sizes were created for the film by designer John Nolan. In a departure from previous films, the animatronic dinosaurs were made of recyclable materials. ILM created various CGI dinosaurs for the film's five-minute prologue, released in 2021.
Scientific accuracy
Premise
The franchise's premise involves the cloning of dinosaurs through ancient DNA, extracted from mosquitoes that sucked the blood of such animals and were then fossilized in amber, preserving the DNA. Later research showed that this would not be possible due to the degradation of DNA over time. The oldest DNA ever recovered dates back only about 1 million years, whereas the last non-avian dinosaurs died out roughly 65 million years ago. It is also unlikely that dinosaur DNA would survive a mosquito's digestive process, and fragments of DNA would not be nearly enough to recreate a dinosaur. In addition, the type of mosquito used in the first film, Toxorhynchites rutilus, does not actually suck blood.
The premise presents other issues as well. Michael Crichton's 1990 novel Jurassic Park and its film adaptation both explain that gene sequence gaps were filled in with frog DNA, although this would not result in a true dinosaur, as frogs and dinosaurs are not genetically similar. Furthermore, the novel uses artificial eggs to grow the dinosaurs, while the film uses ostrich eggs, although neither would be suitable for development.
At the time of the first film's release, Spielberg said he considered the premise to be "science eventuality" rather than science fiction, although Crichton disagreed: "It never crossed my mind that it was possible. From the first moment of publication, I was astonished by the degree to which it was taken seriously in scientific circles". Microbiologists at the time considered the premise to be implausible. The film's dinosaur consultant, paleontologist Jack Horner, later said, "Even if we had dinosaur DNA, we don't know how to actually form an animal just from DNA. The animal cloning that we do these days is with a live cell. We don't have any dinosaur live cells. The whole business of having a dinosaur is a lot of fiction". Horner has instead proposed that a "Chickenosaurus" may be possible, by altering a chicken's DNA.
Dinosaurs
In creating Jurassic Park, Spielberg wanted to accurately portray the dinosaurs, and Horner was hired to ensure such accuracy. Tippett, a dinosaur enthusiast, also helped to keep the dinosaur portrayals realistic. The film followed the theory that dinosaurs had evolved into birds, and it was praised for its modern portrayal of dinosaurs, although Horner said that there were still many inaccuracies. However, he noted that the film is not a documentary and said he was "happy with having some fiction thrown in", stating, "My job was to get a little science into Jurassic Park, but not ruin it". Spielberg sought to portray the dinosaurs as animals rather than monsters, which changed the public perception of dinosaurs, although the sequels would have a deeper focus on rampaging dinosaurs. Horner said that in reality, "Visiting a dinosaur park would be like going to a wild animal park. As long as you keep your windows rolled up, nobody's going to bother you. But that doesn't make a very good movie".
Horner was involved throughout the production process. His consulting work included the supervision of the CGI dinosaurs, ensuring that they were life-like and scientifically accurate. Horner and Spielberg would discuss ways to combine scientific facts with fictional elements, the latter being for entertainment purposes. Horner said "if I could demonstrate that something was true or not true, then he would go with that, but if I had some question about it and we didn't really have much evidence about it, he would go with whatever he thought would make the best movie." Horner returned as a paleontological consultant for the next four films. For The Lost World: Jurassic Park, Spielberg largely followed Horner's advice regarding dinosaur accuracy, but some exceptions were made. Winston's team closely modelled the dinosaurs based on paleontological facts, or theories in certain cases where facts were not definitively known. In Jurassic Park III, the character Dr. Alan Grant, a paleontologist, states that the resurrected dinosaurs are not authentic but rather are "genetically engineered theme park monsters".
Before the release of Jurassic World, new research had shown that real dinosaurs were more colorful than they were in the films. Horner said that Spielberg "has made the point several times to me that colorful dinosaurs are not very scary. Gray and brown and black are more scary." Horner considered the colors to be the most inaccurate aspect of the films' dinosaurs. In addition, the dinosaurs are often depicted roaring, although paleontologists find this speculative or unrealistic. Horner said, "Dinosaurs gave rise to birds, and birds sing. I think most of the dinosaurs actually sang rather than growled."
Despite new dinosaur discoveries, the sequels largely kept the earlier dinosaur designs for continuity with the previous films. Paleontologists were disappointed with the outdated dinosaur portrayals in Jurassic World, including the lack of feathered dinosaurs, although they acknowledged that it is a work of fiction. Trevorrow said that Jurassic World was not meant as a documentary film: "It is very inaccurate — it's a sci-fi movie." The film itself includes a scene stating that any inaccuracies in the dinosaurs can be attributed to the fact that they are genetically engineered animals. Trevorrow noted that the dinosaurs in the franchise – going back to Crichton's novels Jurassic Park and The Lost World (1995) – were partially recreated with frog DNA, stating "those weren't 'real' dinosaurs, any of them." Tim Alexander, visual effects supervisor for ILM, said that colorful dinosaurs were excluded because they would look out of place in the film: "It's very forest greens and taupes and park rangers. And if we then throw a bright pink raptor in there, it's going to stick out and look a little weird".
For Jurassic World: Fallen Kingdom, ILM consulted with paleontologists and did extensive research to accurately depict the dinosaurs. Dinosaur expert John Hankla, of the Denver Museum of Nature and Science, served as an advisor on the film, and also provided several dinosaur fossil recreations for the film. Horner said that his own involvement on Fallen Kingdom was minimal. Horner was consulted again for Jurassic World Dominion, and paleontologist Steve Brusatte was also hired as a science consultant. Feathered dinosaurs appear in Jurassic World Dominion and its prologue.
List of creatures
The following list includes on-screen appearances. Some animals listed here have also made prior appearances in the novels.
Ankylosaurus
Ankylosaurus first appears, briefly, in Jurassic Park III. It was created by ILM entirely through CGI.
Ankylosaurus also appears in Jurassic World, as Trevorrow considered the dinosaur to be among his favorites. It is one of several creatures that Trevorrow felt was deserving of a substantial scene. In the film, an Ankylosaurus is killed by the Indominus rex. Trevorrow stated that the dinosaur's death was an example of moments in the film "that are designed to really make these creatures feel like living animals that you can connect to. Especially since so many of the themes in the film involve our relationship with animals on the planet right now, I wanted them to feel real."
In Jurassic World: Fallen Kingdom, several Ankylosaurus flee from a volcanic eruption and at least one is captured by mercenaries. It is later auctioned off to a wealthy Indonesian buyer. Several Ankylosaurus escape the Lockwood Manor estate grounds alongside the other dinosaurs.
Apatosaurus
In the novel Jurassic Park, Apatosaurus is the first dinosaur seen on the island. It is replaced by Brachiosaurus in the film adaptation. Apatosaurus also appears in the sequel novel The Lost World, but is absent from its film adaptation.
Apatosaurus makes its first film appearance in Jurassic World, with several individuals being featured, including one depicted by an animatronic. Unlike earlier films, which featured numerous animatronics, the Apatosaurus was the only one created for Jurassic World. Producer Patrick Crowley was initially hesitant to have an animatronic built because of the high cost, but Trevorrow persuaded him that fans of the series would enjoy it. The animatronic, built by Legacy Effects, consisted of a section of the dinosaur's neck and head. It was used for a close-up shot depicting the animal's death, after it had been injured in a dinosaur attack. Audio recordings of a Harris's hawk were used for the moans of the wounded Apatosaurus.
To animate the Apatosaurus, ILM used elephants as an example. Glen McIntosh, the animation supervisor for ILM, stated that "there are no existing animals that have such large necks, but in terms of the size and steps they're taking, elephants are an excellent example of that. Also the way their skin jiggles and sags. You also have impact tremors that rise up through their legs as they take steps." Originally, Legacy Effects only created a small model of the Apatosaurus for use in the film, but executive producer Steven Spielberg decided that a larger model would be better. The original model was scanned into a computer, allowing artists to create a larger 3-D model needed for the film. Apatosaurus makes appearances in the subsequent Jurassic World films.
Brachiosaurus
In the first Jurassic Park film, a Brachiosaurus is the first dinosaur seen by the park's visitors. The scene was described by Empire as the 28th most magical moment in cinema. A later scene depicts characters in a high tree, interacting with a Brachiosaurus. This scene required the construction of a 7.5-foot-tall puppet that represented the animal's upper neck and head. The film inaccurately depicts the species as having the ability to stand on its hind legs, allowing it to reach high tree branches. The dinosaur is also inaccurately depicted as chewing its food, an idea that was added to make it seem docile like a cow. Whale songs and donkey calls were used for the Brachiosaurus sounds, although scientific evidence showed that the real animal had limited vocal abilities. Brachiosaurus appears again in Jurassic Park III, created by ILM entirely through CGI.
Brachiosaurus returns in Jurassic World: Fallen Kingdom, including a scene in which one individual is stranded on Isla Nublar and dies in a volcanic eruption. Director J. A. Bayona stated that this Brachiosaurus is meant to be the same one that is first seen in the original Jurassic Park. For Fallen Kingdom, the Brachiosaurus was created using the same animations from the first film. The Brachiosaurus death was the last shot of the film to be finished. Bayona and the post-production team struggled to perfect the CGI, with only several days left to complete the scene. They worked through the final night to perfect the colors and composition, shortly before the film's release. Fans and film critics found the dinosaur's death scene sad, with reviewers describing it as "poignant" or "haunting", particularly given the species' role in the first film.
Compsognathus
Procompsognathus appears in the novels, but is replaced by Compsognathus in the film series.
Their first film appearance is in The Lost World: Jurassic Park. In the film, the character Dr. Robert Burke, a paleontologist, identifies the dinosaur as Compsognathus triassicus, which in reality is a non-existent species; the film combined the names of Compsognathus longipes and Procompsognathus triassicus. In the film, Compsognathus are depicted as small carnivorous theropods which attack in packs.
The Compsognathus were nicknamed "Compies" by Winston's crew. Dennis Muren, the film's visual effects supervisor, considered Compsognathus the most complex digital dinosaur. Because of their small size, the Compies had their entire body visible onscreen and thus needed a higher sense of gravity and weight. A simple puppet of the Compsognathus was used in the film's opening scene, in which the dinosaurs attack a little girl. Later in the film, they kill the character Dieter Stark, who is played by Peter Stormare. For Stark's death scene, Stormare had to wear a jacket with numerous rubber Compies attached.
Compsognathus make brief appearances in all subsequent films, with the exception of Jurassic World. In the novels, Procompsognathus is depicted with the fictitious feature of a venomous bite, although such a trait is not mentioned regarding their onscreen counterparts. Compsognathus returns in the 2022 film Jurassic World Dominion.
Dilophosaurus
A fictionalized version of Dilophosaurus appears in the first novel and its film adaptation, both depicting it with the ability to spit venom. The film's Dilophosaurus also has a fictionalized neck frill that retracts, and the dinosaur was made significantly smaller to ensure that audiences would not confuse it with the velociraptors. While the real Dilophosaurus was thought to have stood at around 10 feet high, the animatronic was only four feet in height. In addition to the animatronic, a set of legs was also created for a shot in which the dinosaur hops across the screen. The Dilophosaurus scene was shot on a sound stage, and the animal's lower body portion was suspended from a catwalk with bungee cords. No CGI was used in creating the Dilophosaurus.
In both the novel and its film adaptation, a Dilophosaurus uses its venom on the character Dennis Nedry before killing him. The animatronic model was nicknamed "Spitter" by Winston's team. A paintball mechanism was used to spit the venom, which was a mixture of methacyl, K-Y Jelly, and purple food coloring. The film's idea of a neck frill came from a suggestion by concept artist John Gurche. The animatronic was made to support three interchangeable heads, depending on the position of the frill. The dinosaur's vocal sounds are a combination of a swan, a hawk, a howler monkey, and a rattlesnake.
Spielberg initially believed that the Dilophosaurus would be the easiest dinosaur to film, although the scene proved harder to shoot than he had expected. The scene is set during a storm, and the use of water to simulate the rain resulted in complications for the animal's puppeteer. A shot not included in the final film would have shown inflatable venom sacs, located under the animal's mouth. These would become visible as the dinosaur spits its venom, which would be expelled from the animatronic's mouth using compressed air. However, the atmosphere was cold and humid on-set, and the compressed air became visible under these conditions. Spielberg resolved the issue by cutting the scene to Nedry as the venom hits him, rather than showing it exiting the animal's mouth.
Dilophosaurus was popularized by its film appearance in Jurassic Park, but is considered the most fictionalized dinosaur in the film. Horner, in 2013, described Dilophosaurus as a good dinosaur to "make a fictional character out of, because I think two specimens are known, and both of them are really crappy. They're not preserved very well." Paleontologist Scott Persons later said that the Dilophosaurus is the most controversial dinosaur depiction in the film series: "A lot of paleontologists get very, very upset about Dilophosaurus."
In Jurassic World, a Dilophosaurus appears as a hologram in the theme park's visitor center. The dinosaur's venom is also referenced in a comedic tour video featured in the film, in which tour guide Jimmy Fallon is paralyzed by the venom.
A living Dilophosaurus was intended to appear in Jurassic World: Fallen Kingdom, but the scene was never filmed, as director Bayona decided that it was not necessary. The scene, set on board the Arcadia ship, would depict the characters Owen and Claire encountering a Dilophosaurus in a cage. Bayona believed that the Arcadia scenes were long enough already. Dilophosaurus appears in Jurassic World: Fallen Kingdom only as a diorama, on display at Benjamin Lockwood's estate.
Dilophosaurus returns in Jurassic World Dominion.
Dimorphodon
Dimorphodon, a type of pterosaur, appears in Jurassic World, marking its first appearance in the series. In the film, the creatures launch an attack on tourists after being released from an aviary. Through motion capture, dwarf actor Martin Klebba stood in as a Dimorphodon during a scene in which one of the creatures tries to attack Owen. A full-scale Dimorphodon head was also created. Sounds from baby brown pelicans were used as the vocal effects for the Dimorphodons.
Gallimimus
A group of running Gallimimus is featured in the first film, and is encountered by the character of Dr. Alan Grant along with Lex and Tim. The Gallimimus were created by ILM entirely through CGI. It was the first dinosaur to be digitized. The Gallimimus design was based on ostriches, and the animators also referred to footage of herding gazelles. In the ILM parking lot, animators were filmed running around to provide reference for the dinosaurs' run, with plastic pipes standing in for a fallen tree that the Gallimimus jump over. One of the animators fell while trying to make the jump, and this inspired the incorporation of a Gallimimus also falling. A portion of the scene depicts a Tyrannosaurus killing a Gallimimus, which was inspired by a scene in The Valley of Gwangi. Horse squeals were used to provide the Gallimimus vocal sounds.
Gallimimus returns in Jurassic World, in which a running herd is depicted during a tour. The scene is a reference to the dinosaur's appearance in the first film. This new Gallimimus scene was created by Image Engine, whose artists often viewed the species' appearance in the first film for reference. Jeremy Mesana, the animation supervisor for Image Engine, said, "We were always going back and staring at that little snippet from the first film. It was always interesting trying to find the feeling of the Gallimimus. Trying to capture the same essence of that original shot was really tricky." By the time Jurassic World was created, scientists had found that Gallimimus had feathers, although this trait is absent from the film.
Giganotosaurus
Giganotosaurus is introduced in the 2021 Jurassic World Dominion prologue. It serves as the dinosaur antagonist in the prologue and the film itself. Trevorrow saved the Giganotosaurus for the third Jurassic World film to set up a rivalry between it and the T. rex. In the prologue, a Giganotosaurus kills a T. rex in battle during the Cretaceous, and two cloned versions face off in the subsequent film, set during the present day.
Indominus rex
Indominus rex is a fictional dinosaur in Jurassic World. It is a genetically modified hybrid (or transgenic) dinosaur, made up of DNA from various animals. It is created by the character Dr. Henry Wu to boost theme-park attendance. In the film, it is stated that the dinosaur's base genome is a T. rex, and that it also has the DNA of a Velociraptor, a cuttlefish, and a tree frog. The film's promotional website states that the creature also has the DNA of a Carnotaurus, a Giganotosaurus, a Majungasaurus, and a Rugops. Trevorrow said the mixed DNA allowed the animal to have attributes "that no dinosaur was known to have".
The Indominus is white in color. It can sense thermal radiation, and has the ability to camouflage itself thanks to its cuttlefish DNA. Carnotaurus was previously depicted in Crichton's novel The Lost World with the same ability to camouflage. Other characteristics of the Indominus include its long arms, raptor claws, and small thumbs. It is able to walk on four legs. ILM's animation supervisor, Glen McIntosh, said, "The goal was to always make sure she felt like a gigantic animal that was a theropod but taking advantage of its extra features." Therizinosaurus inspired the long forelimbs of the Indominus. Horner rejected an early idea that the dinosaur could be depicted as bulletproof, but he otherwise told Trevorrow to add any attributes that he wanted the animal to have. Trevorrow and Horner began with a list of possible characteristics and then gradually narrowed it down. Trevorrow said, "These kind of things were often decided by the needs of the narrative. If it was going to pick up a guy and bite his head off, it was going to need thumbs." Trevorrow wanted the Indominus to look like it could be an actual dinosaur, while Horner was disappointed that the dinosaur did not look more extreme, saying that he "wanted something that looked really different".
In an earlier draft of the script, the film's dinosaur antagonist was depicted as a real animal despite being a non-existent species in reality. Trevorrow chose to make the antagonist a genetically modified hybrid dinosaur named Indominus rex, maintaining consistency with earlier films which had generally incorporated the latest paleontological discoveries. He said, "I didn't wanna make up a new dinosaur and tell kids it was real". Fans were initially concerned upon learning that the film would feature a hybrid dinosaur, but Trevorrow said that the concept was "not tremendously different" from dinosaurs in earlier films, in which the animals were partially recreated with frog DNA. He described a hybrid dinosaur as "the next level", and said "we aren't doing anything here that Crichton didn't suggest in his novels." Horner considered the concept of transgenic dinosaurs to be the most realistic aspect of the film, saying it was "more plausible than bringing a dinosaur back from amber." However, a hybridized dinosaur made of various animals' DNA would still be exceedingly difficult to create, due to the complexity of altering the genomes.
Trevorrow said the behavior of the Indominus was partially inspired by the 2013 film Blackfish, saying that the dinosaur "is kind of out killing for sport because it grew up in captivity. It's sort of, like, if the black fish orca got loose and never knew its mother and has been fed from a crane." In the film, it is stated that there were initially two Indominus individuals, and that one cannibalized its sibling. Fifth scale maquettes of the Indominus rex were created for lighting reference. Motion capture was initially considered for portraying the Indominus, although Trevorrow felt that the method did not work well for the dinosaur. The animal sounds used to create the Indominus roars included those from big pigs, whales, beluga whales, dolphins, a fennec fox, lions, monkeys, and walruses.
The name Indominus rex is derived from the Latin words indomitus meaning "fierce" or "untameable" and rex meaning "king". The creature is sometimes referred to as the I. rex for short, although producer Frank Marshall stated that the film crew abbreviated the name as simply Indominus. Among the public, the Indominus rex was occasionally known during production as Diabolus rex, a name that Trevorrow made up to maintain secrecy on the film prior to its release.
In the film, the character Hoskins proposes making miniature versions of the Indominus as military weapons. The Indominus rex is killed during a battle with a T. rex, a Velociraptor, and a Mosasaurus.
In the sequel, Jurassic World: Fallen Kingdom, DNA is retrieved from the Indominus rex skeleton and is used alongside Velociraptor DNA to create the Indoraptor.
Indoraptor
Indoraptor is a fictional hybrid dinosaur in Jurassic World: Fallen Kingdom. It is made by combining the DNA from the Indominus rex and a Velociraptor. In the film, it is created by Dr. Henry Wu as a weaponized animal. The creature escapes at Benjamin Lockwood's estate and kills several people, before battling Blue, a Velociraptor. The Indoraptor eventually falls to its death when it is impaled on the horn of a ceratopsian skull, on display in Lockwood's library of dinosaur skeletons.
The Indoraptor has long human-like arms, which Spielberg considered to be the animal's scariest trait. It is depicted as a facultative biped, able to move on either two or four legs. The front teeth and long claws were inspired by Count Orlok in Nosferatu. Bayona chose black for the dinosaur's color to give the appearance of a black shadow, saying "it's very terrifying when you see the Indoraptor in the dark because you can only see the eyes and the teeth." Initially, the film was to feature two Indoraptors, one black and one white. The black Indoraptor would kill the white one, in what Bayona considered similar to Cain and Abel. The white Indoraptor was ultimately removed from the script as the story was considered detailed enough without it.
The Indoraptor was primarily created through CGI, although close-up shots used a practical head, neck, shoulders, foot and arm. Neal Scanlan provided the animatronics. An inflatable Indoraptor stand-in, operated by two puppeteers on set, was used for some scenes, with CGI replacing it later in production. David Vickery, ILM's visual effects supervisor, said that Bayona wanted the Indoraptor to look "malnourished and slightly unhinged". The Indoraptor vocal sounds were created by combining noises from various types of animal, including chihuahua, pig, cougar, and lion. The sound of dental drills was also used.
Bayona incorporated elements from the 1931 film Frankenstein as he wanted to give the Indoraptor the feel of a "rejected creature". Bayona said, "There's something of that in the way we introduce the character, the Indoraptor, this kind of laboratory in the underground facilities at the end of a long corridor, inside a cell. It has this kind of Gothic element that reminds me a little bit of the world of Frankenstein, this kind of Gothic world. And we have also references of people with mental illness, like this kind of shake you see from time to time. It's kind of like a nervous tic that the Indoraptor has, and it's taken from real references of mentally ill people".
The Indoraptor is the last hybrid dinosaur of the Jurassic World trilogy.
Mosasaurus
Mosasaurus appears in Jurassic World, as the first aquatic reptile in the films. Earlier drafts for Jurassic Park III and Jurassic Park IV (later Jurassic World) had featured the aquatic reptile Kronosaurus. The Mosasaurus was suggested by Trevorrow, as part of a theme-park feeding show in which park-goers watch from bleachers as the animal leaps out of a lagoon and catches its prey: a shark hanging above the water. The park guests are then lowered in the bleacher seats for a view of the mosasaur's aquatic habitat.
The Mosasaurus was designed to resemble the dinosaurs designed by Winston for the earlier films. Trevorrow said, "We made sure to give her a look and a kind of personality in the way we designed her face that recalled Stan Winston's designs for many of the other dinosaurs in this world. She looks like a Jurassic Park dinosaur." Legacy Effects developed the original design for the Mosasaurus and ILM refined it. The animators referenced crocodiles for the creature's swimming pattern.
The Mosasaurus was originally designed as a 70-foot-long animal, but Spielberg requested that it be enlarged after seeing the initial design. ILM was concerned about making the animal appear too large, but the team was advised by Horner that an increased length would fit within the realm of possibility, as larger aquatic reptiles were consistently being discovered. The animal's length was increased to nearly 120 feet. Some criticized the Mosasaurus for appearing to be twice the size of the largest known species. Horner said "the size of this one is a little out of proportion, but we don't know the ultimate size of any extinct animal." The film inaccurately depicts the Mosasaurus with scutes along its back, a trait that was based on outdated depictions of the creature. Audio recordings of a walrus and a beluga whale provided the Mosasaurus roars.
The Mosasaurus returns in Jurassic World: Fallen Kingdom, in the opening and ending sequences. Compared with the previous film, the Mosasaurus is depicted as being larger in Fallen Kingdom. ILM animation supervisor Glen McIntosh cited this as an example of how "we sometimes have to fudge reality to make something work. From shot to shot, the mosasaurus often changed size slightly to make best use of each frame composition". Although Mosasaurus was thought to have had a forked tongue, McIntosh said that the fictional animal was given a regular tongue to make it "more believable to most filmgoers", saying that "we'd played with its scale so much that we felt giving it a forked tongue would be too much".
For both films, ILM referenced footage of breaching whales, which helped the team determine how to create realistic shots where the Mosasaurus leaps from the water. The Mosasaurus makes a brief return in the short film Battle at Big Rock, and also appears in Jurassic World Dominion.
Pachycephalosaurus
Pachycephalosaurus appears in The Lost World and its film adaptation. For the film, it was created as a five-foot-tall dinosaur measuring eight feet long. Three versions of the Pachycephalosaurus were created for filming: a full hydraulic puppet, a head, and a head-butter. The latter was built to withstand high impact for a scene in which the dinosaur head-butts one of the hunter vehicles using its domed skull. The puppet version was one of the most complex created for the film, and was used for a scene in which the dinosaur is captured. The legs of the puppet were controlled through pneumatics. Among the public, Pachycephalosaurus is the best-known member of the Pachycephalosauria clade, in part because of its appearance in The Lost World: Jurassic Park. Later research suggested that the animal's skull was not used for head-butting.
In Jurassic World, a Pachycephalosaurus briefly appears on a surveillance screen inside the park's control room.
Pteranodon
Pteranodon, a pterosaur, makes a brief appearance at the end of The Lost World: Jurassic Park. Earlier drafts of the script had featured Pteranodon in a larger role, and Spielberg insisted to Jurassic Park III director Joe Johnston that he include the creature in the third film. Pteranodon is prominently featured in Jurassic Park III, although it is a fictionalization of the actual animal, and it has a different appearance to those seen in The Lost World: Jurassic Park. In the third film, a group of Pteranodons are kept in an aviary on Isla Sorna. The idea of a pterosaur aviary had originated in Crichton's original Jurassic Park novel. An earlier draft of the film had included a storyline about Pteranodons escaping to the Costa Rican mainland and killing people there.
The Pteranodons in Jurassic Park III were created through a combination of animatronics and puppetry. Winston's team created a Pteranodon model with a wingspan of 40 feet, although the creatures are predominantly featured in the film through CGI. To create the flight movements, ILM animators studied footage of flying bats and birds, and also consulted a Pteranodon expert. Winston's team also designed and created five rod puppets to depict baby Pteranodons in a nest, with puppeteers working underneath the nest to control them. The third film ends with a shot of escaped Pteranodons flying away from the island. Johnston wanted an ending shot of "these creatures being beautiful and elegant". He denied, then later suggested, that the fleeing Pteranodons would be included in the plot for a fourth film. Promotional material for the Jurassic World films later explained that the escaped Pteranodons were killed off-screen after reaching Canada.
Another variation of Pteranodon is featured in Jurassic World, which also depicts them living in an aviary. They are later inadvertently freed by the Indominus rex and wreak havoc on the park's tourists. For Jurassic World, the Pteranodon vocal effects were created using audio recordings of a mother osprey, defending her chicks against another individual.
Pteranodons make an appearance in a post-credits scene for Jurassic World: Fallen Kingdom. The scene is set at the Paris Las Vegas resort, where escaped Pteranodons land atop the resort's Eiffel Tower.
A Pteranodon makes a brief appearance in the short film Battle at Big Rock, and several individuals appear in the Jurassic World Dominion prologue, as well as the main film.
The films depict Pteranodon with the ability to pick up humans using its feet, although the actual animal would not have been able to do this.
Spinosaurus
Spinosaurus is introduced in Jurassic Park III and appears throughout the film, which popularized the animal. After the two previous movies, the filmmakers wanted to replace the T. rex with a new dinosaur antagonist. Baryonyx was originally considered, before Horner convinced the filmmakers to go with his favorite carnivorous dinosaur: Spinosaurus, an animal larger than the T. rex. Spinosaurus had a distinctive sail on its back; Johnston said, "A lot of dinosaurs have a very similar silhouette to the T-Rex ... and we wanted the audience to instantly recognize this as something else".
Winston's team created the Spinosaurus over a 10-month period, beginning with a 1/16 maquette version. This was followed by a 1/5 scale version with more detail, and eventually the full-scale version. The Spinosaurus animatronic was built from the knees up, while full body shots were created through CGI. The animatronic measured 44 feet long, weighed 13 tons, and was faster and more powerful than the 9-ton T. rex. Winston and his team had to remove a wall to get the Spinosaurus animatronic out of his studio. It was then transported by flatbed truck to the Universal Studios Lot, where a sound stage had to be designed specifically to accommodate the large dinosaur. The Spinosaurus was placed on a track that allowed the creature to be moved backward and forward for filming. Four Winston technicians were required to fully operate the animatronic. It had 1,000 horsepower, compared to the T. rex which operated at 300 horsepower. Johnston said, "It's like the difference between a family station wagon and a Ferrari". For a scene in which the Spinosaurus stomps on a crashed airplane, Winston's team created a full-scale Spinosaurus leg prop, controlled by puppeteers. The leg, suspended in the air by two poles, was slammed down into a plane fuselage prop for a series of shots.
The film's Spinosaurus was based on the limited fossil record suggesting what the actual animal had looked like. A scene in the film depicts the Spinosaurus swimming, an ability the real animal was believed at the time to have possessed. Later research supported this idea, suggesting that the animal was primarily aquatic, whereas the film version is depicted largely as a land animal.
In Jurassic Park III, the Spinosaurus kills a T. rex during battle. Some fans of the Jurassic Park series were upset with the decision to kill the T. rex and replace it. Horner later said that the dinosaur would not have won against a T. rex, believing it was likely that Spinosaurus only ate fish. An early script featured a death sequence for the Spinosaurus near the end of the film, as the character Alan Grant would use a Velociraptor resonating chamber to call a pack of raptors which would attack and kill it.
A skeleton of Spinosaurus is featured in Jurassic World, on display in the theme park. The skeleton is later destroyed when a T. rex is set free and smashes through it, meant as revenge for the earlier scene in Jurassic Park III.
Stegoceratops
Stegoceratops is a hybrid dinosaur with the body of a Stegosaurus and the head of a Triceratops. It makes only a brief appearance near the end of Jurassic World, when an image of the dinosaur is visible on a computer screen in Dr. Henry Wu's laboratory. An early draft of the film had a scene where Owen and Claire came across the Stegoceratops in the jungle on Isla Nublar. The Stegoceratops would have joined the Indominus rex as a second hybrid dinosaur. However, Trevorrow decided to remove the animal from the final script after his son convinced him that having multiple hybrids would make the Indominus less unique.
Although the dinosaur is largely removed from the film, a toy version was still released by Hasbro, which produced a toyline based on Jurassic World. Trevorrow, discussing his decision to remove the Stegoceratops, said, "The idea that there was more than one made it feel less like the one synthetic among all the other organics, and suddenly it seemed entirely wrong to have it in the movie. I suddenly hated the idea but the toy still exists as a kind of remnant because Hasbro toys are locked a year out." The dinosaur also appears in the video games Jurassic World: The Game (2015), Jurassic World Alive (2018) and Jurassic World Evolution (2018). Outside of the franchise, a Stegoceratops had also appeared in the film Yor, the Hunter from the Future (1983) where it was coincidentally referred to by that name.
Stegosaurus
Stegosaurus appears in the Jurassic Park novel but was replaced by Triceratops for the film adaptation. The dinosaur's name (misspelled as "Stegasaurus") is seen on an embryo cooler label in the film, but the animal is otherwise absent. Stegosaurus instead made its film debut in The Lost World: Jurassic Park, after writer David Koepp took a suggestion from a child's letter to include the dinosaur. According to Spielberg, Stegosaurus was included due to "popular demand". In the film, a group of adult Stegosaurus attack Dr. Sarah Harding when they spot her taking pictures of their baby, believing that she is trying to harm it. Stegosaurus is among other dinosaurs that are captured later in the film.
Full-sized versions of an adult and an infant Stegosaurus were built by Winston's team. The adult animatronics, 26 feet long and 16 feet tall, went largely unused due to mobility issues and safety concerns, and Spielberg opted for digital versions of the adults so they could be more mobile; Winston's adult Stegosaurus is shown only in a brief shot, in which the animal is caged. The baby Stegosaurus was eight feet long and weighed 400 pounds.
Stegosaurus has appeared briefly in each film since then. For Jurassic World, ILM studied the movements of rhinos and elephants, and copied their movements when animating the Stegosaurus. The film inaccurately depicts Stegosaurus dragging its tail near the ground, unlike previous films.
The animal makes a brief return in the short film Battle at Big Rock.
Stygimoloch
Stygimoloch is introduced in Jurassic World: Fallen Kingdom, and was included for comic relief. Its vocal sounds were a combination of dachshund, camel, and pig noises. Sound designer Al Nelson said, "It created this sweet, gurgling kind of thing that fits perfectly with this funny little creature". Horner was surprised by the inclusion of Stygimoloch, whose existence was considered doubtful by him and other paleontologists; they believed the animal to actually be a juvenile form of Pachycephalosaurus rather than a separate dinosaur. Like Pachycephalosaurus, the Stygimoloch had a domed skull, which it uses in the film to smash through a brick wall.
Triceratops
Triceratops makes an appearance in the first film as a sick dinosaur, taking the place of the novel's Stegosaurus. Triceratops was a childhood favorite of Spielberg's. The Triceratops was portrayed through an animatronic created by Winston's team. Winston was caught off-guard when Spielberg decided to shoot the Triceratops scene sooner than expected. It took eight puppeteers to operate the animatronic. The Triceratops ended up being the first dinosaur filmed during production. Aside from the adult Triceratops, a baby had also been created for the character of Lex to ride around on, but this scene was cut to improve the film's pacing. To create the Triceratops vocals, sound designer Gary Rydstrom breathed into a cardboard tube and combined the sound with that of cows near his workplace at Skywalker Ranch.
Triceratops makes brief appearances in each of the subsequent films. In The Lost World: Jurassic Park, a baby Triceratops was created by Winston's team for a shot depicting the animal in a cage. For its appearance in Jurassic World, the ILM animators studied rhinos and elephants, as they did with the Stegosaurus. In the film, Triceratops is depicted galloping, although the real animal was sluggish and would not have been able to do so.
An adult and baby Triceratops appear in Jurassic World: Fallen Kingdom.
Tyrannosaurus
Tyrannosaurus rex is the primary dinosaur featured in the novels and throughout the film series. For the first film, Winston's team created a full-scale animatronic T. rex; at the time, it was the largest sculpture ever made by Stan Winston Studio. The studio building had to be modified for the construction of the animatronic. Horner called it "the closest I've ever been to a live dinosaur". The animatronic was used in a scene set during a storm, depicting the T. rex as it breaks free from its enclosure. Shooting the scene was difficult because the foam rubber skin of the animatronic would absorb water, causing the dinosaur to shake from the extra weight. In between takes, Winston's team had to dry off the dinosaur in order to proceed with filming. Winston's team initially created a miniature sculpture of the T. rex, serving as a reference for the construction of the full-sized animatronic. ILM also scanned the miniature sculpture for CGI shots of the animal.
One scene in the film depicts the T. rex chasing a Jeep. Animator Steve Williams said he decided to "throw physics out the window and create a T. rex that moved at sixty miles per hour even though its hollow bones would have busted if it ran that fast". In the film, it is stated that the T. rex has been recorded running as fast as 32 miles per hour, although scientists believe that its actual top speed would have ranged from 12 to 25 miles per hour. In the novel and its film adaptation, it is stated that the T. rex has vision based on movement. However, later studies indicate that the dinosaur had binocular vision, like a bird of prey. The T. rex roar was created by combining the sounds of a baby elephant, a tiger, and an alligator.
In the first film, the T. rex was originally supposed to be killed off. Halfway through filming, Spielberg realized that the T. rex was the star of the film and decided to have the script changed just before shooting the death scene. The changes resulted in the final ending, in which the T. rex inadvertently saves the human characters by killing a pack of velociraptors. Spielberg had the ending changed out of fear that the original ending, without the T. rex, would disappoint audiences.
A Tyrannosaurus family is featured in the film sequel The Lost World: Jurassic Park. The original T. rex animatronic from the first film was re-used for the sequel, and Winston's team also built a second adult. The adult animatronics were built from head to mid-body, while full body shots were created through CGI. The animatronics weighed nine tons each and cost $1 million apiece.
Michael Lantieri, the film's special effects supervisor, said, "The big T. rex robot can pull two Gs of force when it's moving from right to left. If you hit someone with that, you'd kill them. So, in a sense, we did treat the dinosaurs as living, dangerous creatures." The adult animatronics were used for a scene in which the dinosaurs smash their heads against a trailer, causing authentic damage to the vehicle rather than using computer effects. As part of this sequence, an 80-foot track was built into the sound stage floor, allowing the T. rexes to be moved backward and forward. The adult T. rexes could not be moved from their location on the sound stage, so new sets had to be built around the animatronics as filming progressed. Animatronics were primarily used for a scene in which the T. rexes kill the character Eddie, with the exception of two CGI shots: when the animals emerge from the forest and when they tear Eddie's body in half. Otherwise, animatronics were used for shots in which the animals tear the vehicle apart to get to Eddie. Filming the scene with the animatronics required close collaboration with a stunt coordinator. An animatronic T. rex was also used in scenes depicting the deaths of Dr. Robert Burke and Peter Ludlow.
As in the novel The Lost World, a baby T. rex is also depicted in the film adaptation, through two different practical versions, including a remote-controlled version for the actors to carry. A second, hybrid version was operated by hydraulics and cables; this version was used during a scene in which the dinosaur lies on an operating table while a cast is set on its broken leg. Weeks before filming began, Spielberg decided to change the ending to have an adult T. rex rampage through San Diego, saying, "We've gotta do it. It's too fun not to."
A T. rex appears only briefly in Jurassic Park III, which instead uses Spinosaurus as its primary dinosaur antagonist. In the film, a T. rex is killed in a battle against a Spinosaurus.
A T. rex appears in Jurassic World and is meant to be the same individual that appeared in the first film. Trevorrow said "we took the original design and obviously, technology has changed. So, it's going to move a little bit differently, but it'll move differently because it's older. And we're giving her some scars and we're tightening her skin. So, she has that feeling of, like, an older Burt Lancaster." Motion capture was used to portray the T. rex, and a full scale foot was created for lighting reference and to help with framing shots. Following the film's release, fans began referring to the individual as "Rexy". Phil Tippett had worked on storyboards for the original film and had referred to the T. rex as "Roberta".
The same T. rex returns in Jurassic World: Fallen Kingdom. For its appearance, ILM sent Neal Scanlan the T. rex model previously used for Jurassic World. Using the model, Scanlan created a full-scale 3D print of the T. rex head and shoulders. The life-sized T. rex animatronic, which had the ability to breathe and move its head, was controlled with joysticks. It was used for a scene where the sedated T. rex is inside a cage, while Owen and Claire attempt to retrieve blood from it for a transfusion. The beginning shots of the scene were created using only the animatronic, while the ending shots solely used CGI. The middle portion of the scene used a combination of the two methods. Trevorrow said about the dinosaur, "We've been following this same character since the beginning; she's the same T. rex that was in Jurassic Park and in Jurassic World. She is iconic—not just because she's a T. rex, but because she's this T. rex." The same T. rex appears in the 2022 film Jurassic World Dominion, and its 2021 prologue, which Trevorrow described as an origin story for the T. rex "in the way we might get to do in a superhero film. The T-Rex is a superhero for me."
The physical appearance of the T. rex in the early Jurassic World films is contrary to new discoveries about the dinosaur. For consistency, the films have also continued to depict the dinosaur with its wrists pointing downward at an unnatural angle, whereas the real animal had its wrists facing sideways toward each other. The Jurassic World Dominion prologue features the animal lightly covered in protofeathers.
Velociraptor
Velociraptor is depicted in the franchise as an intelligent pack hunter. It has major roles in the novels and the films, both of which depict it as being bigger than its real-life counterpart. The franchise's Velociraptors are actually based on the Deinonychus, but are larger than the latter. In writing Jurassic Park, Crichton was partly inspired by Gregory S. Paul's 1988 book Predatory Dinosaurs of the World, which mislabeled Deinonychus as a Velociraptor species.
John Ostrom, who discovered Deinonychus, was also consulted by Crichton for the novel, and later by Spielberg for the film adaptation. Ostrom said that Crichton based the novel's Velociraptors on Deinonychus in "almost every detail", but ultimately chose the name Velociraptor because he thought it sounded more dramatic. Crichton's oversized version of the animal was carried over into the film adaptation. The Utahraptor, discovered shortly before the 1993 release of Jurassic Park's film adaptation, was much closer in size to the franchise's Velociraptors; Winston joked, "After we created it, they discovered it." Like their fictional counterparts, real raptors are believed to have been intelligent and may have been pack hunters.
In the first film, the raptors were created through a combination of animatronics and CGI. A fully functioning raptor head took four months to create. The creature was also depicted by men in suits for certain scenes, including the death of character Robert Muldoon, who is mauled by one. John Rosengrant, a member of Winston's team, had to bend over to fit inside the raptor suit for a scene set in a restaurant kitchen. Filming lasted up to four hours at a time; Rosengrant said, "My back would go out after about 30 minutes, and that was after having trained a couple of hours a day for weeks." Part of the kitchen scene was initially going to depict the raptors with forked tongues, like snakes. Horner objected to this, saying it would have been scientifically inaccurate, in part because it would imply a link with cold-blooded reptiles. Instead, Spielberg opted to feature a raptor snorting onto a kitchen-door window, fogging it up. This would keep with the idea that dinosaurs were warm-blooded. The various raptor vocals were created by combining the sounds of dolphin screams, walruses bellowing, geese hissing, an African crane's mating call, tortoises mating, and human rasps.
Velociraptor has appeared in each subsequent film. In The Lost World: Jurassic Park, a mechanical version of the raptor was created to depict the animal's upper body. A full-motion raptor was also created through CGI. In addition to the regular raptors, a "super-raptor" had also been considered for inclusion in the film, but Spielberg rejected it, saying it was "a little too much out of a horror film. I didn't want to create an alien."
In the first film, Muldoon states that the raptors are extremely intelligent. Jurassic Park III depicts them as being smarter than previously realized, with the ability to communicate with one another through their resonating chambers. This was inspired by the theory that other dinosaurs, such as Parasaurolophus, were capable of sophisticated communication. Johnston said "it's not completely outlandish that a raptor using soft tissue in its nasal area could produce some kind of sound and communicate in much the same way that birds do. There's all kinds of evidence of lots of different species of animals communicating. So, I don't think we were breaking any rules there or creating something that was scientifically impossible". The new raptor vocals were created from bird sounds. Velociraptor animatronics were used for Jurassic Park III, and a partial raptor suit was also made for a scene depicting the death of Udesky. Before the release of Jurassic Park III, most paleontologists theorized that Velociraptor had feathers like modern birds. For the third film, the appearance of the male raptors was updated to depict them with a row of small quills on their heads and necks, as suggested by Horner.
Paleontologist Robert T. Bakker, who was an early pioneer of the dinosaur-bird connection, said in 2004 that the feather quills in Jurassic Park III "looked like a roadrunner's toupee", although he noted that feathers were difficult to animate. He speculated that the raptors in the upcoming Jurassic Park IV would have more realistic plumage. Jurassic Park IV, ultimately released as Jurassic World, does not feature feathered Velociraptors, maintaining consistency with earlier films. Horner said "we knew Velociraptor should have feathers and be more colorful, but we couldn't really change that look because everything goes back to the first movie." Velociraptor is also depicted holding its front limbs in an outdated manner, not supported by scientific findings. Research has also found that the real animal lacked the flexible tails and snarling facial expressions that are depicted in the film.
At Spielberg's suggestion, the fourth film includes a plot about four raptors being trained by a dinosaur researcher, Owen Grady (portrayed by Chris Pratt). When Trevorrow joined the project as director, he felt that the plot aspect of trained raptors was too extreme, as it depicted the animals being used for missions. Trevorrow reduced the level of cooperation that the raptors would have with their trainer. Early in the film, the raptors are being trained to not eat a live pig located in their enclosure; Trevorrow said that this "was as far as we should be able to go" with the concept of trained raptors. Owen's relationship with the raptors was inspired by real-life relationships that humans have with dangerous animals such as lions and alligators.
In Jurassic World, the raptors were created primarily through motion capture. A full-sized raptor model from the first film was also provided by Legacy Effects to ILM as a reference. The model weighed approximately and measured approximately tall and long. Life-size maquettes were also used during scenes in which the raptors are caged. Audio recordings of penguins and toucans provided the raptor vocals. The sound effects of the raptors moving around were created by Benny Burtt, who attached microphones to his shoelaces and tromped around Skywalker Ranch, the film's sound-recording facility.
Several raptors are killed in Jurassic World, leaving only one survivor, a female individual named Blue.
In Jurassic World: Fallen Kingdom, Owen's past bond with Blue prompts him to join a mission to save her and other dinosaurs from Isla Nublar, after the island's volcano becomes active. For the film, Neal Scanlan's team created a Blue animatronic that was laid on an operating table, for a scene depicting the animal after an injury. The animatronic was operated by a dozen puppeteers hidden under the table. The scene was shot with and without the animatronic, and the two versions were combined during post-production. Modified penguin noises were used during this scene to provide a purring sound for Blue.
To create Blue's CGI appearances, the ILM animators referred to the previous film. David Vickery of ILM said that Blue's movements were designed to resemble a dog: "You look at the way Blue cocks her head and looks up at you. It's exactly like a dog. You're trying to sort of connect the dinosaur with things that you understand as a human." Small puppets were also used to depict Owen's raptors as babies. John Hankla, an advisor for Jurassic World: Fallen Kingdom, provided an accurately sized Velociraptor skeleton that appears in the background at the Lockwood Estate's library of dinosaur skeletons. It is the first accurately sized Velociraptor to appear in the franchise.
Blue is the focus of a two-part virtual reality miniseries, titled Jurassic World: Blue. It was released for Oculus VR headsets as a Fallen Kingdom tie-in. It depicts Blue on Isla Nublar at the time of the volcanic eruption.
Other creatures
In the first film, a skeleton of Alamosaurus is present in the Jurassic Park visitor center. Parasaurolophus made a brief debut in the first film, and has appeared in each one since then, including the short film Battle at Big Rock.
Mamenchisaurus appears briefly in The Lost World: Jurassic Park as one of the dinosaurs chased by Peter Ludlow's group. The Mamenchisaurus design was based on a maquette created by Winston's team. ILM then took the Brachiosaurus model from the first film and altered it to portray the Mamenchisaurus, which was fully computer-generated.
Ceratosaurus and Corythosaurus are introduced in Jurassic Park III, through brief appearances.
Allosaurus, Baryonyx, Carnotaurus, and Sinoceratops are introduced in Jurassic World: Fallen Kingdom. Baryonyx and Carnotaurus were among the dinosaurs created through CGI. The Carnotaurus vocal sounds were made from orangutan noises, as well as Styrofoam that was scraped with a double-bass bow. Sinoceratops makes several appearances in the film, including a scene in which the dinosaur is shown licking Owen after he has been sedated. Animator Jance Rubinchik described this as the dinosaur's motherly instinct to save Owen. The scene was shot using a prop tongue.
In Jurassic World: Fallen Kingdom, the skull of an unnamed ceratopsian is kept on display in Benjamin Lockwood's estate. Production designer Andy Nicholson said "When it came to the ceratopsian skull which takes centre stage in Lockwood Manor, we were quite conscious that it couldn’t be a Triceratops because it wouldn’t have been big enough to kill the Indoraptor. With that in mind, we created a new genus which was an amalgamation of two different ceratopsians." Several creatures appear in the film as dioramas, on display in Lockwood's estate. These include Concavenator, Dimetrodon, and Mononykus.
Allosaurus returns in Battle at Big Rock, which also introduces Nasutoceratops. Jurassic World Dominion introduces Atrociraptor, Pyroraptor, and Therizinosaurus. Lystrosaurus, a therapsid rather than a dinosaur, also appears in Jurassic World Dominion, portrayed with the use of an animatronic. The film's prologue introduces several other new creatures, including Dreadnoughtus, Iguanodon, Oviraptor, and the pterosaur Quetzalcoatlus. It also features Moros intrepidus, a small, feathered member of the tyrannosaur family that was described in 2019. Moros intrepidus also appears in the film itself, along with Quetzalcoatlus. Returning dinosaurs include Allosaurus, Carnotaurus, and Nasutoceratops.
Notes
References
Bibliography
External links
Dinosaur profiles at JurassicWorld.com
The Science of Jurassic Park and the Lost World (1997) by Rob DeSalle and David Lindley
Jurassic Park
Fictional endangered and extinct species
Lists of fictional reptiles and amphibians
Lists of fictional animals by work
Fictional clones
Fictional dinosaurs |
378193 | https://en.wikipedia.org/wiki/Coprocessor | Coprocessor | A coprocessor is a computer processor used to supplement the functions of the primary processor (the CPU). Operations performed by the coprocessor may be floating-point arithmetic, graphics, signal processing, string processing, cryptography or I/O interfacing with peripheral devices. By offloading processor-intensive tasks from the main processor, coprocessors can accelerate system performance. Coprocessors allow a line of computers to be customized, so that customers who do not need the extra performance do not need to pay for it.
Functionality
Coprocessors vary in their degree of autonomy. Some (such as FPUs) rely on direct control via coprocessor instructions embedded in the CPU's instruction stream. Others are independent processors in their own right, capable of working asynchronously; even so, they are either not optimized for general-purpose code or incapable of running it, owing to a limited instruction set focused on accelerating specific tasks. It is common for these to be driven by direct memory access (DMA), with the host processor (a CPU) building a command list. The PlayStation 2's Emotion Engine contained an unusual DSP-like SIMD vector unit capable of both modes of operation.
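The command-list pattern can be illustrated with a short, hypothetical sketch in C. The structure names, ring size, and "doorbell" step are illustrative assumptions rather than any particular device's interface; a real driver would use bus addresses and memory-mapped registers supplied by the platform.

/* Hypothetical sketch: a host CPU fills a command ring in shared memory
 * and a DMA-driven coprocessor consumes it. All names are illustrative. */
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 64

enum cop_op { COP_COPY = 1, COP_TRANSFORM = 2, COP_SIGNAL = 3 };

struct cop_cmd {
    uint32_t op;      /* operation code understood by the coprocessor */
    uint32_t length;  /* number of bytes to process */
    uint64_t src;     /* bus address of the input buffer */
    uint64_t dst;     /* bus address of the output buffer */
};

struct cop_ring {
    struct cop_cmd cmds[RING_SIZE];
    volatile uint32_t head;  /* next free slot, written by the host */
    volatile uint32_t tail;  /* next slot to execute, written by the device */
};

/* Append one command; a real driver would then write a memory-mapped
 * "doorbell" register so the coprocessor starts fetching via DMA. */
static int cop_submit(struct cop_ring *ring, struct cop_cmd cmd)
{
    uint32_t next = (ring->head + 1) % RING_SIZE;
    if (next == ring->tail)
        return -1;                     /* ring is full */
    ring->cmds[ring->head] = cmd;      /* fill the descriptor */
    ring->head = next;                 /* publish it to the device */
    return 0;
}

int main(void)
{
    static struct cop_ring ring;       /* would live in DMA-visible memory */
    struct cop_cmd cmd = { COP_COPY, 4096, 0x1000, 0x2000 };

    if (cop_submit(&ring, cmd) == 0)
        printf("command queued at slot %u\n", ring.head - 1);
    return 0;
}

The producer-consumer ring is what allows such coprocessors to work asynchronously: the host queues descriptors and continues running without waiting for each operation to finish.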
History
To make the best use of mainframe computer processor time, input/output tasks were delegated to separate systems known as channel I/O. The mainframe then required no I/O processing at all; instead it would simply set parameters for an input or output operation and signal the channel processor to carry out the whole of the operation. By dedicating relatively simple sub-processors to time-consuming I/O formatting and processing, overall system performance was improved.
Coprocessors for floating-point arithmetic first appeared in desktop computers in the 1970s and became common throughout the 1980s and into the early 1990s. Early 8-bit and 16-bit processors used software to carry out floating-point arithmetic operations. Where a coprocessor was supported, floating-point calculations could be carried out many times faster. Math coprocessors were popular purchases for users of computer-aided design (CAD) software and scientific and engineering calculations. Some floating-point units, such as the AMD 9511, Intel 8231/8232 and Weitek FPUs were treated as peripheral devices, while others such as the Intel 8087, Motorola 68881 and National 32081 were more closely integrated with the CPU.
Another form of coprocessor was the video display coprocessor, as used in the Atari 8-bit family, the Texas Instruments TI-99/4A, and MSX home computers; these chips were known as "Video Display Controllers". The Commodore Amiga custom chipset included such a unit, known as the Copper, as well as a Blitter for accelerating bitmap manipulation in memory.
As microprocessors developed, the cost of integrating the floating point arithmetic functions into the processor declined. High processor speeds also made a closely integrated coprocessor difficult to implement. Separately packaged mathematics coprocessors are now uncommon in desktop computers. The demand for a dedicated graphics coprocessor has grown, however, particularly due to an increasing demand for realistic 3D graphics in computer games.
Intel
The original IBM PC included a socket for the Intel 8087 floating-point coprocessor (FPU), which was a popular option for people using the PC for computer-aided design or mathematics-intensive calculations. In that architecture, the coprocessor sped up floating-point arithmetic on the order of fiftyfold. Users who only used the PC for word processing, for example, saved the high cost of the coprocessor, which would not have accelerated text-manipulation operations.
The 8087 was tightly integrated with the 8086/8088 and responded to floating-point machine code operation codes inserted in the 8088 instruction stream. An 8088 processor without an 8087 could not interpret these instructions, requiring separate versions of programs for FPU and non-FPU systems, or at least a test at run time to detect the FPU and select appropriate mathematical library functions.
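A minimal, modern-C sketch of that run-time dispatch idea is shown below; it is not period 8088 code. The fpu_present stub stands in for the real detection sequence (which used the FNINIT/FNSTSW instructions to probe the 8087 status word), and the software fallback is a simple Newton-Raphson routine of the kind an emulation library might have provided.

/* Hypothetical sketch of run-time FPU dispatch: choose between a hardware
 * math routine and a software fallback once at startup, then call through
 * a function pointer. Compile with -lm. */
#include <math.h>
#include <stdio.h>

/* Software fallback: Newton-Raphson square root (valid for x >= 0),
 * standing in for a software floating-point library. */
static double soft_sqrt(double x)
{
    double guess = x > 1.0 ? x / 2.0 : 1.0;
    for (int i = 0; i < 32; i++)
        guess = 0.5 * (guess + x / guess);
    return guess;
}

/* Placeholder detection routine; real 8087 detection executed FNINIT and
 * FNSTSW and inspected the stored status word. Here we simply assume yes. */
static int fpu_present(void) { return 1; }

int main(void)
{
    /* Select the math routine once, then dispatch through the pointer. */
    double (*my_sqrt)(double) = fpu_present() ? sqrt : soft_sqrt;
    printf("sqrt(2) = %f\n", my_sqrt(2.0));
    return 0;
}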
Another coprocessor for the 8086/8088 central processor was the 8089 input/output coprocessor. It used the same programming technique as the 8087 for input/output operations, such as transferring data from memory to a peripheral device, thereby reducing the load on the CPU. IBM did not use it in the IBM PC design, however, and Intel stopped developing this type of coprocessor.
The Intel 80386 microprocessor used an optional "math" coprocessor (the 80387) to perform floating point operations directly in hardware. The Intel 80486DX processor included floating-point hardware on the chip. Intel released a cost-reduced processor, the 80486SX, that had no floating point hardware, and also sold an 80487SX coprocessor that essentially disabled the main processor when installed, since the 80487SX was a complete 80486DX with a different set of pin connections.
Intel processors later than the 80486 integrated floating-point hardware on the main processor chip; the advances in integration eliminated the cost advantage of selling the floating point processor as an optional element. It would be very difficult to adapt circuit-board techniques adequate at 75 MHz processor speed to meet the time-delay, power consumption, and radio-frequency interference standards required at gigahertz-range clock speeds. These on-chip floating point processors are still referred to as coprocessors because they operate in parallel with the main CPU.
Another common source of floating-point coprocessors for x86 desktop computers was Weitek. These coprocessors had a different instruction set from the Intel coprocessors and used a different socket, which not all motherboards supported. The Weitek processors did not provide transcendental mathematics functions (for example, trigonometric functions) like the Intel x87 family, and required specific software libraries to support their functions.
Motorola
The Motorola 68000 family had the 68881/68882 coprocessors, which provided floating-point acceleration similar to that of the Intel coprocessors. Computers using the 68000 family but not equipped with a hardware floating-point processor could trap and emulate the floating-point instructions in software, which, although slower, allowed one binary version of the program to be distributed for both cases. The 68851 paged memory-management coprocessor was designed to work with the 68020 processor.
Modern coprocessors
Dedicated graphics processing units (GPUs) in the form of graphics cards are now commonplace. Certain models of sound cards were fitted with dedicated processors providing digital multichannel mixing and real-time DSP effects as early as 1990 to 1994 (the Gravis Ultrasound and Sound Blaster AWE32 being typical examples), while the Sound Blaster Audigy and Sound Blaster X-Fi are more recent examples.
In 2006, AGEIA announced an add-in card for computers that it called the PhysX PPU. PhysX was designed to perform complex physics computations so that the CPU and GPU would not have to perform these time-consuming calculations. It was designed for video games, although other mathematical uses could theoretically be developed for it. In 2008, Nvidia purchased the company and phased out the PhysX card line; the functionality was instead provided in software, with Nvidia's PhysX engine allowing its GPUs to run physics computations on cores normally used for graphics processing.
In 2006, BigFoot Systems unveiled a PCI add-in card, christened the KillerNIC, which ran its own Linux kernel on a Freescale PowerQUICC clocked at 400 MHz; the company called the Freescale chip a Network Processing Unit, or NPU.
The SpursEngine is a media-oriented add-in card with a coprocessor based on the Cell microarchitecture. The SPUs are themselves vector coprocessors.
In 2008, the Khronos Group released OpenCL with the aim of supporting general-purpose CPUs, ATI/AMD and Nvidia GPUs, and other accelerators with a single common language for compute kernels.
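As a small illustration of that idea, the sketch below is an OpenCL C compute kernel for element-wise vector addition. The kernel name and arguments are arbitrary examples, and the host-side boilerplate (platform, context, queue, and buffer setup) is omitted; the point is that the same kernel source can be compiled by an OpenCL runtime for a CPU, a GPU, or another accelerator.

/* Illustrative OpenCL C kernel: element-wise vector addition.
 * The runtime launches one work-item per output element. */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out,
                      const unsigned int n)
{
    size_t i = get_global_id(0);   /* global index of this work-item */
    if (i < n)                     /* guard against padded global sizes */
        out[i] = a[i] + b[i];
}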
In the 2010s, some mobile devices implemented the sensor hub as a coprocessor. Examples of coprocessors used for handling sensor integration in mobile devices include the Apple M7 and M8 motion coprocessors, the Qualcomm Snapdragon Sensor Core and Qualcomm Hexagon, and the Holographic Processing Unit for the Microsoft HoloLens.
In 2012, Intel announced the Intel Xeon Phi coprocessor.
Various companies are developing coprocessors aimed at accelerating artificial neural networks for vision and other cognitive tasks (e.g. vision processing units, TrueNorth, and Zeroth); as of 2018, such AI chips are found in smartphones from Apple and several Android phone vendors.
Other coprocessors
The MIPS architecture supports up to four coprocessor units: one for memory management and system control, one for floating-point arithmetic, and two left undefined for other tasks such as graphics accelerators.
Using FPGAs (field-programmable gate arrays), custom coprocessors can be created for accelerating particular processing tasks such as digital signal processing (e.g. the Zynq, which combines ARM cores with FPGA fabric on a single die).
TLS/SSL accelerators are used on servers; such accelerators used to be add-in cards, but in modern mainstream CPUs the equivalent support is provided by dedicated cryptographic instructions (see the sketch after this list).
Some multi-core chips can be programmed so that one of their processors is the primary processor, and the other processors are supporting coprocessors.
China's Matrix-2000, a proprietary 128-core PCIe coprocessor that requires a host CPU to drive it, was employed in an upgrade of the 17,792-node Tianhe-2 supercomputer (two Intel Ivy Bridge Xeon CPUs plus two Matrix-2000 accelerators per node), now dubbed Tianhe-2A, roughly doubling its speed to about 95 petaflops and exceeding the then-fastest supercomputer.
A range of coprocessors were available for Acorn's BBC Micro computers. Rather than special-purpose graphics or arithmetic devices, these were general-purpose CPUs (such as the 8086, Zilog Z80, or 6502) to which particular types of tasks were assigned by the operating system, off-loading them from the computer's main CPU and resulting in acceleration. In addition, a BBC Micro fitted with a coprocessor was able to run machine-code software designed for other systems, such as CP/M (written for the Z80) and DOS (written for 8086-class processors).
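As an example of the in-CPU crypto instructions mentioned in the TLS/SSL item above, the sketch below exercises the x86 AES-NI instructions through compiler intrinsics. It assumes an x86 CPU with AES-NI and compilation with -maes (GCC/Clang), and it uses a single placeholder round key rather than a proper key schedule, so it illustrates the instruction set rather than providing a usable cipher.

/* Sketch only: exercises the AES-NI round instructions (AESENC/AESENCLAST)
 * via intrinsics. A real AES implementation derives a full round-key
 * schedule; the single repeated key below is a placeholder. */
#include <stdio.h>
#include <stdint.h>
#include <emmintrin.h>   /* SSE2: _mm_set1_epi8, unaligned loads/stores */
#include <wmmintrin.h>   /* AES-NI intrinsics */

int main(void)
{
    uint8_t block[16] = { 0 };                     /* one 128-bit block */
    __m128i state     = _mm_loadu_si128((const __m128i *)block);
    __m128i round_key = _mm_set1_epi8(0x5a);       /* placeholder key */

    state = _mm_xor_si128(state, round_key);        /* initial AddRoundKey */
    state = _mm_aesenc_si128(state, round_key);     /* one full AES round */
    state = _mm_aesenclast_si128(state, round_key); /* final (last) round */

    _mm_storeu_si128((__m128i *)block, state);
    for (int i = 0; i < 16; i++)
        printf("%02x", block[i]);
    printf("\n");
    return 0;
}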
Trends
Over time, CPUs have tended to grow to absorb the functionality of the most popular coprocessors. FPUs are now considered an integral part of a processor's main pipeline; SIMD units gave multimedia its acceleration, taking over the role of various DSP accelerator cards; and even GPUs have become integrated on CPU dies. Nonetheless, specialized units remain popular away from desktop machines and where additional performance is needed, and they allow continued evolution independently of the main processor product lines.
See also
Multiprocessing, the use of two or more CPUs within a single computer system
Torrenza, an initiative to implement coprocessor support for AMD processors
OpenCL framework for writing programs that execute across heterogeneous platforms
Asymmetric multiprocessing
AI accelerator
References
Central processing unit
Heterogeneous computing
OpenCL compute devices |
4859198 | https://en.wikipedia.org/wiki/C4%20Engine | C4 Engine | The C4 Engine is a proprietary computer game engine developed by Terathon Software that is used to create 3D games and other types of interactive virtual simulations for PlayStation 5, PlayStation 4, PlayStation 3, Windows (XP and later), Mac OS X (versions 10.9 and later), Linux, and iOS.
Development history
Development of the C4 Engine is led by computer graphics author Eric Lengyel, who is also the founder of Terathon Software. Although in development sporadically for several years beforehand, the engine was first made available under a commercial license in May 2005. Due to changing market conditions, the C4 Engine was retired in 2015, but it was later announced that it would return in 2021.
Capabilities and features
The architecture of the C4 Engine is that of a layered collection of software components, in which the lowest layers interact with the computer hardware and operating system, and the higher layers provide platform-independent services to the game code. While a considerable portion of the engine is dedicated to 3D graphics, there are also large components dedicated to functionality pertaining to audio, networking, physics, input devices, and scripting. Documentation for the engine is available online through a set of API web pages and a wiki.
Graphics
The C4 Engine is based on the OpenGL library on Windows, Mac, Linux, and iOS platforms, and it uses a one-pass-per-light forward rendering model. The engine is capable of rendering with several different types of light sources and shadowing methods. The primary method for rendering dynamic shadows is shadow mapping, and a variant of cascaded shadow mapping is used for very large outdoor scenes.
Shaders are created in C4 using one of two available methods, both of which isolate the user from the shader code required by the underlying graphics library. Simple shaders can be created by specifying a set of material attributes such as a diffuse reflection color, a specular reflection color, and a group of texture maps. The engine internally generates the necessary shader code for each combination of material and light type that it encounters when rendering a scene. Material attributes can be used to produce effects such as normal mapping, parallax mapping, horizon mapping, and bumpy reflections or refractions.
C4 also includes a graphical Shader Editor that allows complex custom materials to be created using a large set of predefined operations. This method of designing materials enables greater creative freedom and functionality for expert users, but requires somewhat more work by the user. Materials created using the standard material attributes can be converted to custom shaders to serve as a starting point in the Shader Editor.
The terrain capabilities of the C4 Engine are based on a voxel technology, allowing full 3D sculpting to produce features such as overhangs, arches, and truly vertical cliffs that would not be possible under a conventional height-based terrain system. Triangle meshes are generated from voxel data using the Marching cubes algorithm, and seamless multiresolution level of detail is made possible by using the Transvoxel algorithm to stitch together regions of differing resolutions.
The engine is capable of rendering a large variety of special effects, including particle systems, procedural fire, electrical effects, volumetric fog, and weather phenomena. During a post-processing stage, the engine can also apply full-scene cinematic motion blur to the final image using a technique based on a velocity buffer, as well as glow and distortion effects. The engine does not provide the capability to design custom post-processing effects.
Audio
The C4 Engine can play sounds stored in the WAV format using 16-bit mono or stereo sampling, and audio data can be played from memory or streamed from disk. The engine plays sounds using a custom mixer that provides capabilities such as frequency shifting, Doppler effect, reverberation, and atmospheric absorption.
Networking
Multiplayer gameplay in C4 is supported by a two-layer messaging system that uses the UDP protocol to communicate among different computers connected to a game.
Physics
The C4 Engine has a native physics engine, which can be used as provided or replaced with a third-party solution.
Tools
The C4 Engine ships with basic game development tools required to make a modern game. Each tool is packaged as a plugin module that exists separately from the engine itself. Many tools make use of the comprehensive graphical user interface system provided by the engine so that a consistent interface is presented to the user across multiple platforms.
World Editor
The World Editor tool is a 3D content creation application that is typically used to create game environments for use with the C4 Engine. It provides a large set of drawing and manipulation capabilities that are used to construct world geometry as well as many game necessities such as lights, sounds, triggers, and special effects.
The World Editor can import scene information through the OpenGEX and COLLADA formats. This enables the use of content from a large number of digital content creation programs such as Autodesk Maya or 3D Studio Max.
Script editor
The World Editor tool includes a graphical script editor designed to be accessible to artists and level designers as well as programmers. The script editor allows the user to place various "methods" into a directed graph connected by "fibers" representing action dependencies and the order of execution. Scripts support loops through the creation of cycles in the graph structure, and conditional execution is supported by marking fibers to be followed or not followed based on the result value output by the methods at which they start.
The engine ships with several standard script methods that perform simple actions such as enabling or disabling a scene node (for example, to turn a light on or off) and more complex actions such as evaluating an arbitrary mathematical expression. New script methods can be defined by the game code, and they appear in the script editor as custom actions that can be used by a level designer.
Panel editor
The World Editor tool includes a sub-editor called the "panel editor" providing for the creation of 2D interface panels that can be placed inside a 3D world. The panel editor lets the user place various types of widgets such as text and images in a panel effect node that is rendered as part of the scene. Panels can also include a special camera widget that displays the scene that is visible to a camera placed anywhere else in the same world.
Interface panels are both dynamic and interactive. The engine provides an extensible set of "mutators" that can be applied to individual panel items to induce various forms of animation such as scrolling, rotation, or color change. A character in a game can interact with a panel by approaching it and clicking the mouse inside interactive items. Scripts can be attached to such items, causing a sequence of actions to occur when the player activates them.
Licensing
The C4 Engine is licensed for $100 per person, and this includes all future updates at no additional cost.
Academics
The C4 Engine has been licensed by many universities in connection with games-oriented software engineering curricula or for special research projects. These universities include MIT, Georgia Tech, Worcester Polytechnic Institute (WPI), McMaster University, and the University of Kempten. Students in some of these programs are required to create their own games as part of a course using the C4 Engine, and many of these projects have gone on to be entered in the Independent Games Festival student competition.
One particular university research project involved the TactaVest technology developed at WPI, and their use of the C4 Engine was featured in the Discovery Channel Canada television show Daily Planet airing on May 26, 2006.
Games using C4
Games that use the C4 Engine include:
Fat Princess Adventures for PlayStation 4
World of Subways
City Bus Simulator 2010
Lego Wolf3D
Quest of Persia: Lotfali Khan Zand
The 31st
Utility Vehicle Simulator 2012
Bridge! The Construction Game
Rolling
Wingball
Ludicrous
Gremlin Invasion
Gremlin Invasion: Survivor
1 Carnaval De Distorções
Tauchfahrt zur Titanic
Bounce!
The Visible Dark
World Hunter
Rabbit
GreySoul
References
External links
C4 Engine on Mod DB
C4 Engine on DevMaster
Game engines for Linux
IPhone video game engines
MacOS programming tools
PlayStation 3 software
PlayStation 4 software
Video game development software
Video game development software for Linux
Video game engines |
57985849 | https://en.wikipedia.org/wiki/Monorepo | Monorepo | In version control systems, a monorepo ("mono" meaning 'single' and "repo" being short for 'repository') is a software development strategy where code for many projects is stored in the same repository. This software engineering practice is over two decades old and is sometimes known as a 'shared codebase'. The word 'monorepo' has surpassed that phrase in popularity, and foregrounds the mechanism behind dynamic sharing. Central to the current interest is a criticism of the monolithic application as a practice in large organizations.
Google, Facebook, Microsoft, Uber, Airbnb, and Twitter all employ very large monorepos with varying strategies to scale build systems and version control software with a large volume of code and daily changes.
Advantages
There are a number of potential advantages to a monorepo over individual repositories:
Ease of code reuse – Similar functionality or communication protocols can be abstracted into shared libraries and directly included by projects, without the need of a dependency package manager.
Simplified dependency management – In a multiple repository environment where multiple projects depend on a third-party dependency, that dependency might be downloaded or built multiple times. In a monorepo the build can be easily optimized, as referenced dependencies all exist in the same codebase.
Atomic commits – When projects that work together are contained in separate repositories, releases need to sync which versions of one project work with the other. And in large enough projects, managing compatible versions between dependencies can become dependency hell. In a monorepo this problem can be negated, since developers may change multiple projects atomically.
Large-scale code refactoring – Since developers have access to the entire project, refactors can ensure that every piece of the project continues to function after a refactor.
Collaboration across teams – In a monorepo that uses source dependencies (dependencies that are compiled from source), teams can improve projects being worked on by other teams. This leads to flexible code ownership.
Limitations and disadvantages
Loss of version information – Although not required, some monorepo builds use one version number across all projects in the repository. This leads to a loss of per-project semantic versioning.
Lack of per-project access control – With split repositories, access to a repository can be granted based upon need. A monorepo allows read access to all software in the project, possibly presenting new security issues. Note that there are versioning systems in which this limitation is not an issue. For example, when Subversion is used, it's possible to download any part of the repo (even a single directory), and path-based authorization can be used to restrict access to certain parts of a repository.
More storage needed by default – With split repositories, you fetch only the project you are interested in by default. With a monorepo, you check out all projects by default. This can take up a significant amount of storage space. While all versioning systems have a mechanism to do a partial checkout, doing so defeats some of the advantages of a monorepo.
Scalability challenges
Companies with large projects have come across hurdles with monorepos, specifically concerning build tools and version control systems. Google's monorepo, speculated to be the largest in the world, meets the classification of an ultra-large-scale system and must handle tens of thousands of contributions every day in a repository over 80 terabytes large.
Scaling version control software
Companies using or switching to existing version control software found that software could not efficiently handle the amount of data required for a large monorepo. Facebook and Microsoft chose to contribute to or fork existing version control software Mercurial and Git respectively, while Google eventually created their own version control system.
For more than ten years, Google had relied on Perforce hosted on a single machine. In 2005, Google's build servers could get locked up for as long as 10 minutes at a time. Google improved this to 30 seconds–1 minute in 2010. Due to scaling issues, Google eventually developed its own in-house distributed version control system dubbed Piper.
Facebook ran into performance issues with the version control system Mercurial and made upstream contributions to the client, and in January 2014 made it faster than a competing solution in Git.
In May 2017 Microsoft announced that virtually all of its Windows engineers use a Git monorepo. In the transition, Microsoft made substantial upstream contributions to the Git client to remove unnecessary file access and improve handling of large files with Virtual File System for Git.
Scaling build software
Few build tools work well in a monorepo, and flows where builds and continuous integration testing of the entire repository are performed upon check-in cause performance problems. Directed-graph build systems like Buck, Bazel, Pants, and Please solve this by compartmentalizing builds and tests to the active area of development.
Twitter began development of Pants in 2011, as both Facebook's Buck and Google's Bazel were closed-source at the time. Twitter open-sourced Pants in 2012 under the Apache 2.0 License.
Please, a Go-based build system, was developed in 2016 by Thought Machine, which was also inspired by Google's Bazel and dissatisfied with Facebook's Buck.
References
Version control
Software development process |
66185637 | https://en.wikipedia.org/wiki/History%20of%20the%20United%20States%20Space%20Force | History of the United States Space Force | While the United States Space Force gained its independence on 20 December 2019, the history of the United States Space Force can be traced back to the beginnings of the military space program following the conclusion of the Second World War in 1945. Early military space development was begun within the United States Army Air Forces by General Henry H. Arnold, who identified space as a crucial military arena decades before the first spaceflight. Gaining its independence from the Army on 18 September 1947, the United States Air Force began development of military space and ballistic missile programs, while also competing with the United States Army and United States Navy for the space mission.
In 1954, the Air Force created its first space organization, the Western Development Division, under the leadership of General Bernard Schriever. The Western Development Division and its successor organization, the Air Force Ballistic Missile Division, were instrumental in developing the first United States military launch vehicles and spacecraft, competing predominantly with the Army Ballistic Missile Agency under the leadership of General John Bruce Medaris and former German scientist Wernher von Braun. The launch of Sputnik 1 spurred a massive reorganization of military space; the 1958 establishment of the Advanced Research Projects Agency was a short-lived effort to centralize management of military space, with some fearing it would become a military service for space, and its authorities were returned to the services in 1959. The establishment of NASA in 1958, however, completely decimated the Army Ballistic Missile Agency, resulting in the Air Force Ballistic Missile Division serving as the primary military space organization. In 1961, the Air Force was designated as the Department of Defense's executive agent for space and Air Research and Development Command was reorganized into Air Force Systems Command, with the Air Force Ballistic Missile Division being replaced by the Space Systems Division, the first Air Force division solely focused on space. In the 1960s, military space activities began to be operationalized, with Aerospace Defense Command taking control of missile warning and space surveillance on behalf of NORAD, Strategic Air Command assuming the weather reconnaissance mission, and Air Force Systems Command operating the first generations of communications satellites on behalf of the Defense Communications Agency. In 1967, the Space Systems Division and Ballistic Missiles Division were merged to form the Space and Missile Systems Organization, which began to develop the next generation of satellite communications, space-based missile warning, space launch vehicles and infrastructure, and the predecessor to the Global Positioning System. Space forces also saw their first employment in the Vietnam War, providing weather and communications support to ground and air forces.
The disjointed nature of military space forces across three military commands resulted in a reevaluation of space force organization within the Air Force. In 1979, the Space and Missile Systems Organization was split, forming the Space Division, and in 1980, Aerospace Defense Command was inactivated and its space forces transferred to Strategic Air Command. Resulting from internal and external pressures, including an effort by a congressman to rename the Air Force the Aerospace Force and the possibility that President Reagan would direct the creation of a space force as a separate military branch, the Air Force directed the formation of Air Force Space Command in 1982. During the 1980s, Air Force Space Command absorbed the space missions of Strategic Air Command and the launch mission from Air Force Systems Command. Space forces provided space support during the Falklands War, the United States invasion of Grenada, the 1986 United States bombing of Libya, Operation Earnest Will, and the United States invasion of Panama. The first major employment of space forces culminated in the Gulf War, where space forces proved so critical to the U.S.-led coalition that it is sometimes referred to as the first space war.
Following the end of the Gulf War, the Air Force came under intense congressional scrutiny for seeking to artificially merge its air and space operations into a seamless aerospace continuum, without regard for the differences between space and air. The 2001 Space Commission criticized the Air Force for institutionalizing the primacy of aviation pilots over space officers in Air Force Space Command, for stifling the development of an independent space culture, and for not paying sufficient budgetary attention to space. The Space Commission recommended the formation of a Space Corps within the Air Force between 2007 and 2011, with an independent Space Force to be created at a later date. The September 11 attacks derailed most progress in space development, resulting in the inactivation of United States Space Command and beginning a period of atrophy in military space. The only major change to occur was the transfer of the Space and Missile Systems Center from Air Force Materiel Command to Air Force Space Command. Following the inactivation of U.S. Space Command in 2002, Russia and China began developing sophisticated on-orbit capabilities and an array of counter-space weapons, with the 2007 Chinese anti-satellite missile test of particular concern as it created 2,841 high-velocity debris items, a larger amount of dangerous space junk than any other space event in history. On 29 August 2019, United States Space Command was reestablished as a geographic combatant command.
In response to advances by the Russian Space Forces and Chinese People's Liberation Army Strategic Support Force, and frustrated by the Air Force's focus on fighters at the expense of space, Democratic Representative Jim Cooper and Republican Representative Mike Rogers introduced a bipartisan proposal to establish the United States Space Corps in 2017. While the Space Corps proposal failed in the Senate, in 2019 the United States Space Force was signed into law, with Air Force Space Command becoming the United States Space Force and being elevated to the sixth military service in the United States Armed Forces.
Early military space development (1945–1957)
Early American military space activities began immediately after the conclusion of the Second World War. On 20 June 1944, MW 18014, a German Heer A-4 ballistic missile launched from the Peenemünde Army Research Center, became the first artificial object to cross the Kármán line, the boundary between air and space. The A-4, more commonly known as the V-2, was used by the German Wehrmacht to launch long-range attacks on Allied cities on the Western Front; however, its designer, Wernher von Braun, had aspirations to use it as a space launch vehicle and defected to the United States at the end of the war. A number of former German scientists, along with significant amounts of research material, were covertly moved to the United States as part of Operation Paperclip, jumpstarting the space program.
On 12 November 1945, General of the Army Henry H. Arnold, the commanding general of the United States Army Air Forces, sent a report to Secretary of War Robert P. Patterson emphasizing that the future United States Air Force would need to invest heavily in space and ballistic missile capabilities, rather than just focus on current aircraft. General Arnold received strong backing from Theodore von Kármán, the head of the Army Air Forces Scientific Advisory Group, and later United States Air Force Scientific Advisory Board. A 1946 study by Project RAND, directed by General Arnold and conducted by Louis Ridenour to determine the feasibility of a strategic reconnaissance satellite, identified nearly all future space mission areas, including intelligence, weather forecasting, satellite communications, and satellite navigation.
The first instance of interservice rivalry in military space development occurred in 1946, when the United States Navy Bureau of Aeronautics Electronics Division proposed testing the feasibility of an artificial satellite; however, it was unable to get Navy funding to attempt a launch and instead requested a joint program with the War Department Aeronautical Board. General Carl Spaatz, commanding general of the Army Air Forces and later the first chief of staff of the Air Force, and Major General Curtis LeMay, then Deputy Chief of Staff for Research and Development, denied the Navy's request, as their position was that military space was an extension of strategic air power and thus an Air Force mission. By 1948, the Navy had suspended its satellite program, focusing instead on rocketry. On 18 September 1947, the Army Air Forces gained their independence as the United States Air Force. While the Air Force still held claim that military space was its domain, the new service prioritized conventional strategic bombers and fighter aircraft over long-term ballistic missile and space development.
Each of the three services continued to have independent ballistic missile and space development programs, with the United States Army Ordnance Department running Project Hermes out of White Sands, albeit with representatives from the Air Force Cambridge Research Center and Naval Research Laboratory. The Army saw rocketry and missiles as an extension of artillery, and on 24 February 1949 it launched an RTV-G-4 Bumper rocket to an altitude of 393 kilometers. This set the stage for future Army space and missile developments under Wernher von Braun, initially at Fort Bliss and after 1950 at Redstone Arsenal; von Braun would later go on to develop the PGM-11 Redstone short-range ballistic missile, the PGM-19 Jupiter medium-range ballistic missile, and the Juno I and Juno II launch vehicles. United States Navy space research was run primarily through the civilian-led Johns Hopkins University Applied Physics Laboratory and Naval Research Laboratory, while the Army and Air Force organized their military space development under military programs. The Navy developed the Aerobee and Viking rockets. The United States Air Force centralized its missile program under Air Materiel Command, cutting its programs significantly due to the Truman drawdown after the Second World War. This eliminated the RTV-A-2 Hiroc, which was the Air Force's only long-range missile program.
Despite the Air Force's sweeping cuts to research and development, in 1949, General Hoyt Vandenberg, the second chief of staff of the Air Force, commissioned two reports, which came to the conclusion that abdicating its missile and space activities could result in the Army and Navy taking over those responsibilities. In response, on 23 January 1950, the Air Force established a deputy chief of staff of the Air Force for research, and on 1 February 1950 it activated Air Research and Development Command (ARDC). Air Research and Development Command absorbed Air Materiel Command's engineering division and became responsible for Air Force missile and space programs. The outbreak of the Korean War led to the Air Force regaining a significant amount of funding, and in January 1951, ARDC began development of the Convair SM-65 Atlas intercontinental ballistic missile; however, ballistic missiles had a number of skeptics on the Air Staff, which led to reduced funding and slowed development. In April 1951, Project RAND released two studies on military satellite development, one titled "Utility of a Satellite Vehicle for Reconnaissance" and the other "Inquiry into the Feasibility of Weather Reconnaissance from a Satellite Vehicle." The reports were enthusiastically received at Air Research and Development Command, which started a number of satellite design programs. In late 1953, ARDC assigned the satellite program the designation of Weapons Systems 117L (WS-117L), also known as the Advanced Reconnaissance System (ARS), beginning development at the Wright Air Development Center.
In response to the cautious approach of Air Research and Development Command and delaying tactics by the Air Staff, assistant secretary of the Air Force for research and development Trevor Gardner convened the Strategic Missiles Evaluation Committee led by John von Neumann to accelerate ballistic missile development. The findings of the von Neumann Committee and a parallel RAND study resulted in the establishment of the Western Development Division (WDD) under Brigadier General Bernard Schriever, a protégé of General of the Air Force Hap Arnold, on 1 July 1954. The Western Development Division, organized under Air Research and Development Command, was given total responsibility for all ballistic missile development.
The Western Development Division pioneered the use of parallel development, increasing the cost of the program, but ensuring redundancies to speed development times. Its primary program was the Convair SM-65 Atlas ICBM, developing the Martin Marietta HGM-25A Titan I ICBM as a backup, in case of the failure of the Atlas. Ultimately both missiles were put into service. On 10 October 1955, responsibility for the development of military satellites, to include the Advanced Reconnaissance System, was transferred from the Wright Air Development Center to the Western Development Division.
In August 1954, Congress authorized the government to begin development of a satellite to be launched for the International Geophysical Year. Each of the military services sought to launch a satellite for the competition; however, the Department of Defense directed that the effort not detract from the Air Force Western Development Division's ballistic missile development program. Initially, the IGY scientific satellite was intended to establish the legal doctrine of "freedom of space," enabling spacecraft to fly over any country. The Army Ordnance Corps and Office of Naval Research jointly proposed Project Orbiter, which was led by Army Major General John Bruce Medaris and Army scientist Wernher von Braun. The Army was responsible for developing the booster, based on the PGM-19 Jupiter, while the Navy was responsible for the satellite, tracking facilities, and data analysis. The Naval Research Laboratory, however, proposed the single-service Project Vanguard, developing the Vanguard rocket and satellite, along with the Minitrack satellite tracking network. The Air Force Western Development Division initially declined to participate, focusing on military space programs rather than scientific endeavors, but was directed by the Department of Defense to put forward a proposal, an SM-65C Atlas booster with an Aerobee-Hi space probe. Ultimately, the Defense Department selected the Navy's Project Vanguard; although it thought that the Western Development Division's proposal showed great promise, it did not want to interfere with the development of the Atlas ICBM.
On 1 August 1957, the Western Development Division was redesignated as the Air Force Ballistic Missile Division (AFBMD). Two months later, on 4 October 1957, the Soviet Union beat the United States into space, launching Sputnik 1 from the Baikonur Cosmodrome. The launch of Sputnik greatly embarrassed the United States, which the year prior had prohibited government officials from speaking publicly about spaceflight. In February 1957, General Schriever, the senior space officer in the Air Force, was directed by the secretary of defense not to mention "space" in any of his speeches, after publicly discussing the importance of studying military offensive functions in space and declaring that the time was ripe for the Air Force to move into space. Immediately after Sputnik 1's launch, the gag order was rescinded.
Post–Sputnik crisis and organizational reforms (1957–1961)
In the aftermath of the launch of Sputnik 1, President Dwight D. Eisenhower implemented massive reforms in the civil and military space programs. The Soviet Union launched Sputnik 2 shortly afterward, on 3 November 1957, with the Soviet space dog Laika on board. On 8 November 1957, the Department of Defense authorized the Army Ballistic Missile Agency to begin preparations to launch Project Orbiter's Explorer 1 on a Juno I rocket in case the Navy's Project Vanguard were to fail. On 31 January 1958, the Army launched Explorer 1 from Cape Canaveral Air Force Station Launch Complex 26; it became the first American satellite and the third satellite to orbit the Earth.
Establishment of the Advanced Research Projects Agency
The Air Force used the launch of Sputnik 1 to argue that the entire national space program, both civil and military, should be organized under it. This was, in part, spurred by concern that congressional representatives were favoring the Army's space program, led by von Braun. In response, the Air Force led a public campaign to emphasize that space was a natural extension of its mission, coining the term "aerospace" to describe a single continuous sphere of operations from the Earth's atmosphere to outer space. The Air Force attempted to establish the Department of Astronautics on the Air Staff, announcing the decision on 10 December 1957; however, Secretary of Defense Neil H. McElroy prohibited its creation, instead announcing on 20 December that the Defense Department would establish the Advanced Research Projects Agency (ARPA) to unify the space programs of the Army, Navy, and Air Force. Organizationally, the Air Force would represent space on the Air Staff through the assistant chief of staff of the Air Force for guided missiles, finally being permitted to create the Directorate of Advanced Technology to handle space responsibilities after the National Aeronautics and Space Act was enacted on 29 July 1958.
On 24 January 1958, the Air Force Astronautics Development Program was submitted to the Defense Department, articulating the five major systems that the Air Force wanted to pursue in space: ballistic test and related systems, manned hypersonic research (to include the North American X-15), the Boeing X-20 Dyna-Soar orbital glider (to include reconnaissance, interceptor, and bomber variants), the WS-117L Advanced Reconnaissance System (to include a crewed military strategic space station), and the Lunex Project to put an Air Force base on the Moon.
The Advanced Research Projects Agency (ARPA) was officially established on 7 February 1958, taking over service-control of space programs, with the intent to reduce interservice rivalry, raise the profile of space, and reduce unneeded redundancy. ARPA did not operate its own laboratories or personnel, but rather directed programs, assigning them to the different service components to perform the actual development. Projects transferred from the services to ARPA included Operation Argus, a Navy exoatmospheric nuclear detonation testing program, the Navy's Project Vanguard and other satellite and outer space programs, the High Performance Solid Propellants program, the Navy's Minitrack doppler fence, Army and Air Force ballistic missile defense projects, studies of the effects of space weapons employment on military electronic systems, Project Orion, an Air Force program on nuclear bomb-propelled space vehicles, and the WS-117L Advanced Reconnaissance System, which it split into three different programs: the Sentry reconnaissance component, the Missile Defense Alarm System (MIDAS) infrared sensor component, and the Discoverer program, which was a cover for the joint Air Force-Central Intelligence Agency Corona reconnaissance satellite. ARPA redistributed the sounding rockets and ground instrumentation for Project Argus to Air Force Special Weapons Command and the Air Force Cambridge Research Center, weapons systems to control hostile satellites, Project Orion, studies of the effects of space weapons employment on military electronic systems, the WS-117L programs, high energy and liquid hydrogen-liquid oxygen propellent, reentry studies, and Project SCORE (previously an Army satellite communications program) to Air Research and Development Command, the Pioneer lunar probe program to the Air Force Ballistic Missile Division, and the Saturn I, meteorological satellite, and inflatable sphere program to the Army Ordnance Missile Command.
ARPA's techniques were highly unsettling to leaders across all of the military services, as the agency dealt directly with subordinate service commands, bypassing the traditional chain of command. The agency had a complicated relationship with the Air Force, which sought to be the sole service for military space; even so, ARPA consistently awarded it 80% of all military space programs and championed its program of putting a military man in space, awarding it development responsibility for crewed military spaceflight in February 1958. The Man in Space Soonest program was ultimately geared towards putting military astronauts on the Moon and returning them to Earth. The Army and Navy, without the sponsorship of ARPA, still held ambitions for crewed military spaceflight, with the Army Ballistic Missile Agency proposing Project Adam, in which an astronaut would be launched on a sub-orbital trajectory on a Juno II rocket; however, it received no support, as NACA director Hugh Latimer Dryden said it had "about the same technical value as the circus stunt of shooting a young lady from a cannon", and it was outright rejected by the Defense Department. The Navy proposed Manned Earth Reconnaissance I, but it was considered technically infeasible. The Air Force decided to cooperate with ARPA in order to gain development responsibility, and ultimately operational responsibility, for all military space programs. ARPA was the sole national space agency for much of 1958, and carried out presidentially directed civil space missions, such as the Pioneer program of lunar probes, with military resources such as Air Force Thor-Able and Army Juno II rockets.
The National Aeronautics and Space Administration (NASA) was established on 29 July 1958, and was directed by President Eisenhower to become the United States' civil space agency. Eisenhower always intended to have parallel civil and military space programs, only temporarily putting civil space programs under ARPA. NASA was primarily formed from the National Advisory Committee for Aeronautics (NACA) and began operations on 1 October 1958. It absorbed 7,000 NACA personnel and the Langley Research Center, Ames Research Center, Lewis Research Center (now the John H. Glenn Research Center), the High-Speed Flight Station (now the Armstrong Flight Research Center), and the Wallops Flight Facility from the aeronautical research agency. The bulk of NASA's space program, however, was absorbed from the Defense Department, specifically ARPA and the military services. The Navy's space program, mostly run for civil research, was given up willingly, with NASA absorbing Project Vanguard, including 400 Naval Research Laboratory personnel and its Minitrack space tracking network. The Air Force Ballistic Missile Division transferred its Man in Space Soonest program, which became the core of Project Mercury, and the Pioneer program's lunar probe missions. ARPA also transferred over responsibility for special engines, special components for space systems, Project Argus, satellite tracking and monitoring systems, satellite communication relay, meteorological reporting, navigation aid systems, and the NOTS program to get images of the dark side of the Moon. The Army's space program, however, was considered by NASA Administrator T. Keith Glennan to be the most valuable source of space resources and was decimated by the transfer. Major General Medaris, commander of the Army Ordnance Missile Command, very publicly fought the transfer, but nearly the entirety of the Army Ballistic Missile Agency, including von Braun's Saturn I team at Redstone Arsenal (which would become the Marshall Space Flight Center), as well as the Jet Propulsion Laboratory, was transferred to NASA, completely crushing any hope of an independent Army space program.
Aside from NASA, the biggest winner of the transfer was the Air Force Ballistic Missile Division, which saw its rivals in the Army and Navy decimated by it and only had to give up an independent Air Force crewed military program and scientific lunar probes. AFBMD leadership quickly perceived that the best way to enhance their dominance in military space was to cooperate with NASA and make themselves invaluable partners to the new organization, specifically in providing space launch, facilities, and space launch vehicles. ARPA's power as an independent agency took a significant hit when Congress passed the Defense Reorganization Act of 1958, creating the Director of Defense Research and Engineering (DDR&E), which granted the services more authority in space than ARPA had.
Gaining the military space mission
In February 1959, the deputy chief of staff of the Air Force for plans performed an analysis suggesting that the service's weaknesses in space organization, operations, and research and development all stemmed from its early failure to develop a coordinated space program, and that to become the dominant service in space it should demonstrate successful stewardship and push forward to create its own space program, rather than just request missions and roles from ARPA, while at the same time improving service relationships with ARPA and NASA. The Air Force executed an intensive lobbying campaign within Congress, the Defense Department, and NASA, relying heavily on its rationale that it was an aerospace service and that the missions it intended to perform in space were a logical extension of its atmospheric responsibilities. In spring 1959, the Air Force identified twelve major military uses of space:
Military reconnaissance satellites utilizing optical, infrared, and electromagnetic instrumentation
Satellites for weather observation
Military communications satellites
Satellites for electronic countermeasures
Satellite aids for navigation
Manned maintenance and resupply outer space vehicles
Manned defensive outer space vehicles and bombardment satellites
Manned lunar station
Satellite defense system
Manned detection, warning, and reconnaissance space vehicle
Manned bombardment space vehicle or space base
Target drone satellite
Five of these missions (photographic reconnaissance, electronic reconnaissance, infrared reconnaissance, mapping and charting, and space environmental forecasting and observing) had received approval as Air Force General Operational Requirements and represented missions previously identified and analyzed by RAND. The Air Staff released an analysis on constraints that prohibited the Air Force from implementing its aerospace force policy, identifying NASA's responsibility for the scientific space area and ARPA's responsibility for the military space area as key issues. Specifically, the Air Staff faulted ARPA for assigning system development to a service on the basis of existing capability, but without regard for existing or likely space mission and support roles. Rather, the Air Staff felt that ARPA should focus on policy decisions and leave project engineering to the lowest level at the Air Force Ballistic Missile Division. It also argued that the Air Force should be responsible for providing common interest items, such as space launch boosters and satellites, to NASA, enabling the civil agency to focus its budget and efforts entirely on scientific endeavors. This analysis was supported by General Schriever and the AFBMD; Schriever found his command becoming overburdened with ARPA programs and NASA requirements. In April 1959, General Schriever testified before Congress that the Air Force's responsibilities for strategic offensive and defensive missions would be, in part, conducted by ballistic missiles, satellites, and spacecraft. Furthermore, he testified that the Advanced Research Projects Agency should be dissolved, that the Director of Defense Research and Engineering should assume the role of providing policy guidance and service responsibility, and that space research and development control should be returned to the military services.
The Air Force's arguments for autonomy from ARPA were bolstered by the success of major Air Force Ballistic Missile Division programs, including the former elements of the WS-117L Advanced Reconnaissance System. The Samos reconnaissance satellite, formerly known as Sentry, was to be launched using an Atlas-Agena and would be operated by Strategic Air Command to collect photographic and electromagnetic reconnaissance data. The Missile Defense Alarm System (MIDAS), also launched by Atlas-Agena and using infrared sensors, was intended to be under the operational command of the United States-Canadian North American Air Defense Command (NORAD) and the U.S.-only Continental Air Defense Command, operated by the Air Force's Air Defense Command to provide early warning of a Soviet nuclear attack. The Air Force also continued development of the joint Air Force-Central Intelligence Agency Corona program, under the public name of Project Discoverer, which used Thor-Agena boosters launched from Vandenberg Air Force Base. The Air Force Ballistic Missile Division also provided launch support to the other services, launching the Navy's Transit navigation satellites, designed to support its fleet ballistic missile submarines, and the Army's Notus communications satellite. AFBMD continued its development of boosters, including the Thor, Atlas, and Titan space launch vehicles.
One of ARPA's most consequential programs was the Space Detection and Tracking System (SPADATS), initially started under the name Project Sheppard, to integrate the space surveillance systems of the various services. Hurried along by the launch of Sputnik 1, the Air Force contributed Spacetrack (initially Project Harvest Moon), which provided the Interim National Space Surveillance Control Center at Hanscom Field, bringing together Lincoln Laboratory's Millstone Hill Radar, the Stanford Research Institute, an Air Research and Development Command test radar at Laredo Air Force Station, and the Smithsonian Astrophysical Observatory's Baker-Nunn camera, with the Air Force responsible for devising the development plan for future operational space surveillance systems. The Navy was responsible for operating the Navy Space Surveillance System (NAVSPASUR) from Naval Support Activity Dahlgren, and the Army was assigned to develop Doploc, a doppler radar network; however, the Army dropped out of the project. There was strong disagreement between the Air Force and Navy over who would operate the system: the Navy preferred to operate it as a separate system, while the Air Force wanted to place it under the operational command of NORAD and Continental Air Defense Command, with day-to-day operations handled by Air Defense Command. By mid-1959, the topic became so contentious that the Joint Chiefs of Staff had to decide the issue, when it became part of a larger discussion on service roles and missions.
The Air Force Ballistic Missile Division continued to provide significant support for NASA, constructing infrastructure for the space agency at Patrick Air Force Base and Cape Canaveral Air Force Station, as well as providing Thor-Able boosters and launch support to the Pioneer program of lunar probes and Thor-Able and Thor-Delta boosters and launch support to the Television Infrared Observation Satellite (Tiros) weather observation satellites. The Air Force Ballistic Missile Division also supported the development of the Centaur high-energy stage, which it intended to use to support the later-cancelled Advent communications satellite project. Among the most important support the Air Force Ballistic Missile Division provided NASA was for Project Mercury, its first human spaceflight program. AFBMD provided Atlas LV-3B launch vehicles for orbital flights, launch support, and aerospace medical officers. Much of its aerospace medical knowledge was gained from medical personnel who had served with the German Luftwaffe during the Second World War and had come to the United States after the war. The world's first Department of Space Medicine was established at the United States Air Force School of Aviation Medicine (later renamed the School of Aerospace Medicine) in February 1949 by Hubertus Strughold, who coined the term space medicine. Air Force medical personnel would go on to conduct a variety of experiments on weightlessness.
Although NASA was responsible for most crewed spaceflight, the Air Force Ballistic Missile Division continued with the development of the Boeing X-20 Dyna-Soar orbital glider. The X-20 evolved from the rocket plane tests of the 1950s, such as the Bell X-1 and North American X-15. It was created by merging together the Rocket Bomber, the Brass Bell high-altitude reconnaissance system, and the Hywards boost-glide vehicle on 30 April 1957. It was intended to be the first true spaceplane, replacing atmospheric bombers and reconnaissance aircraft. Use of the Saturn I booster with the X-20 was considered, as von Braun proposed a number of times, but was rejected due to concerns that the project would be transferred to NASA, as the Man in Space Soonest program had been.
Interservice rivalries over space persisted in 1959, with Army Major General Medaris testifying before Congress that the Air Force Ballistic Missile Division and General Schriever had a long history of noncooperation with the Army Ballistic Missile Agency, to which General Schriever submitted a long rebuttal, but the charge was not withdrawn by the Army. In April 1959, Admiral Arleigh Burke, chief of naval operations, made a bold bid for a major share of the space enterprise for the Navy, proposing the establishment of a joint Defense Astronautical Agency to the Joint Chiefs of Staff. This was supported by chief of staff of the Army General Maxwell D. Taylor, under the premise that space transcended the interests of any one service; however, it was opposed by chief of staff of the Air Force General Thomas D. White, who had been arguing that space was the domain of the Air Force under the idea of aerospace. This Army-Navy effort for a Defense Astronautical Agency compelled General Schriever to push for the Air Force to acquire as much of the military space mission as it could, stating to the secretary of defense that the Air Force had been operating in aerospace since its beginning in the mission areas of strategic attack, defense against attack, and supporting systems that enhanced both, and that the best way to organize these forces would be to unify them under the Air Force. Schriever went on to state that Army and Navy requirements would be satisfied by the Air Force as the prime operating agency of the military satellite force, much as they were in the air. Service tensions were further strained by ARPA director Roy Johnson's proposal to establish a tri-service Mercury Task Force to support NASA, rather than have the Air Force be the sole supporting service. The Army and Navy supported a Defense Astronautical Agency and joint Mercury Task Force, while the Air Force argued they should be integrated into its already existing command structure. Ultimately, the secretary of defense decided that a Defense Astronautical Agency was not needed at that time to provide operational control of all space forces, denied the request for the Mercury Task Force, instead appointing Air Force Major General Donald Norton Yates, commander of the Air Force Missile Test Center, to direct military support for NASA crewed missions, and assigned the Air Force responsibility for the development, production, and launching of space boosters. Satellite operations responsibilities would be assigned to the services on a case-by-case basis; however, the Air Force received the majority of these responsibilities. The concept behind the Defense Astronautical Agency as a joint space command would be realized 25 years later as United States Space Command.
On 30 December 1959, the Advanced Research Projects Agency's status as the sole controlling entity of military space programs was brought to a close as it was retasked as an operating research and development agency under the Director of Defense Research and Engineering. Responsibilities for military space programs were returned to the individual services, with ARPA only tasked with a few advanced programs.
On 1 May 1960, a Central Intelligence Agency Lockheed U-2 was shot down over the Soviet Union, limiting reconnaissance flights to the edges of the Soviet Union and prompting Congress to increase funding for space-based reconnaissance such as Samos and MIDAS. On 10 June 1960, President Eisenhower directed secretary of defense Thomas S. Gates Jr. to reassess space-based intelligence requirements, concluding that Samos, the Corona program, and the U-2 all represented national assets and that they should be organized under a civilian agency in the Defense Department, not a single military service. On 31 August 1960, secretary of the Air Force Dudley C. Sharp created the Office of Missile and Satellite Systems under the assistant secretary of the Air Force to coordinate Air Force, Central Intelligence Agency, Navy, and National Security Agency intelligence reconnaissance activities. On 6 September 1961, the Office of Missile and Satellite Systems became the National Reconnaissance Office, absorbing all military space reconnaissance programs, such as Samos and Corona. Only MIDAS and the Vela nuclear detonation detection satellites remained in the Air Force's satellite inventory.
On 3 June 1960, the Air Force created the Aerospace Corporation to provide it with technical space competency as a federally funded research and development center, adjacent to the Air Force Ballistic Missile Division headquarters in Inglewood, California. By the end of its first year, the Aerospace Corporation had 1,700 employees and was responsible for 12 space programs. Aerospace would grow to provide general systems engineering and technical direction for every Air Force and Space Force missile and space program.
Kennedy Administration and establishment of Air Force Systems Command
The election of John F. Kennedy as president of the United States put a renewed focus on the space program, both military and civil. President Kennedy appointed Jerome Wiesner to chair a committee to review the organization of military and civil space. The Wiesner Report criticized the fragmented military space program, recommending that one agency or military service be made responsible for all military space, and stated that the Air Force was the logical choice, as it was already responsible for 90% of the support and resources for other space agencies and that the Air Force was the "principal resource for the development and operation of future space systems, except those of a purely scientific nature assigned by law to NASA." Shortly after taking office, secretary of defense Robert McNamara designated the Air Force as the executive agent for military space, assigning it responsibility for "research, development, test, and engineering of Department of Defense space development programs or projects," while still permitting each service to conduct preliminary research and asserting that operational assignment of each space system to a service would be made on a case-by-case basis.
In early 1961, deputy secretary of defense Roswell Gilpatric contacted chief of staff of the Air Force General White and promised the Air Force major responsibility for the space mission if he "put his house in order." Specifically, he was referring to the split responsibility for research and development under Air Research and Development Command and procurement under Air Materiel Command. Secretary Gilpatric's views were informed by General Schriever, now commander of Air Research and Development Command, who had told him that the Air Force could not handle the military space mission unless one command held responsibility for research and development, system testing, and acquisition of space systems. General Schriever had held these views for a number of years, but the issue gained a pressing urgency by 1960, as Air Research and Development Command's Air Force Ballistic Missile Division and Air Materiel Command's Ballistic Missiles Center competed for resources and management focus as the demand for both space and missile systems grew. In September 1960, General White authorized General Schriever to begin reorganization, keeping his space programs in Los Angeles while moving ballistic missile functions to Norton Air Force Base; General Schriever felt this was insufficient, and he was authorized to form a planning task force. Colonel Otto Glasser, later a lieutenant general, developed the reorganization plan, resulting in the reorganization of Air Research and Development Command as Air Force Systems Command (AFSC) on 1 April 1961, giving the organization responsibility for all research, development, and acquisition of aerospace and missile systems. On the same day, Air Materiel Command was reorganized as Air Force Logistics Command, removing its production functions and leaving it responsible for maintenance and supply only. Lieutenant General Schriever, commander of Air Research and Development Command, was promoted to general and made the first commander of Air Force Systems Command.
Air Force Systems Command was organized into four subordinate divisions: the Aeronautical Systems Division (ASD), the Ballistic Missiles Division (BMD), the Electronics Systems Division (ESD), and the Space Systems Division (SSD). The Space Systems Division was established in Los Angeles, absorbing the space elements of the Air Force Ballistic Missile Division and Air Force Ballistic Missiles Center. The Ballistic Missiles Division was established at Norton Air Force Base and absorbed the ballistic missile elements of the Air Force Ballistic Missile Division and Air Force Ballistic Missiles Center, as well as the Army Corps of Engineers Ballistic Missile Construction Office. In addition, the Office of Aerospace Research was established on the Air Staff at the Pentagon.
The Space Systems Division quickly established itself, and on 20 March 1961 the Gardner Report was submitted to General Schriever. In it, Trevor Gardner stated that the United States could not overtake the Soviet Union in space for three to five years without a significant increase in space investment by the Defense Department. He also stated that the line between military and civil space would need to be crossed in a comprehensive lunar landing program that would land astronauts on the Moon between 1967 and 1970, and that such an effort would produce important technologies, industries, and lessons learned for both military and civil space programs.
On 12 April 1961, Soviet Air Forces cosmonaut Yuri Gagarin became the first human to enter space, launching on the spacecraft Vostok 1. This prompted Secretary McNamara to direct Herbert York, director of defense research and engineering, and secretary of the Air Force Eugene M. Zuckert to assess the national space programs in terms of defense interests, specifically considering the findings of the Gardner Report. The task force study took place at the Space Systems Division and was led by Major General Joseph R. Holzapple, Air Force Systems Command assistant deputy commander for aerospace systems. The Holzapple Report, submitted to the secretary of defense on 1 May 1961, called for a NASA-led lunar landing initiative with significant Air Force support.
Space forces in the Cold War (1961–1982)
The designation of the Air Force as the Defense Department executive agent for space and the creation of the Space Systems Division in 1961 solidified the service's status as the dominant military space power. In May 1961, acting in part on the Holzapple Report, the Kennedy Administration assigned NASA the responsibility for the lunar landing mission; however, the Space Systems Division was expected to continue to provide personnel, launch vehicles, and ground support to the civil space agency.
In 1961, major space programs such as Samos and Spacetrack were achieving operational capability, while developmental programs such as MIDAS and the Project SAINT satellite inspector received additional funding. This was in large part due to President Kennedy's push for an integrated national space program, rather than creating strict silos between military space and NASA, and in spring 1961 the Air Force was responsible for 90% of military space efforts.
Crewed military spaceflight programs and military support to NASA
Alarmed by the orbital flights of Soviet Air Forces cosmonauts Yuri Gagarin in Vostok 1 and Gherman Titov in Vostok 2, Air Force Systems Command redoubled its push for a crewed military space program, with chief of staff of the Air Force General Curtis E. LeMay drawing parallels between airpower during the First World War and spacepower in the early 1960s. General LeMay remarked how the initial use of airplanes in the First World War moved from peaceful, chivalric, unarmed reconnaissance flights to combat efforts designed to deny the enemy air superiority, and that it would be naïve to believe that the same trends should not be expected and prepared for in space. This soon became the prevailing view within the defense establishment, and the Boeing X-20 Dyna-Soar's orbital flight program was accelerated, using the Titan IIIC launch vehicle rather than the Titan II GLV.
On 21 September 1961, the Air Force's first formal space plan was completed, calling for an aggressive military space program. Specifically, it recommended continuing the Discoverer/Corona program, MIDAS, Samos, and the Blue Scout research vehicle at their present pace, while accelerating efforts to develop orbital weapons and an anti-satellite and anti-missile defense system. NASA and the Space Systems Division continued to closely cooperate on space launch, with the Space Systems Division developing the Titan IIIC space launch vehicle, which was capable of launching payloads of up to 25,000 pounds into orbit. The Space Systems Division and NASA also closely cooperated on the Apollo Program, jointly selecting the launch location at Cape Canaveral. An agreement between NASA Administrator James E. Webb and Deputy Defense Secretary Roswell Gilpatric made NASA responsible for the costs of the lunar program, while the Space Systems Division would serve as the range manager. On 24 February 1962, the Department of Defense designated the Air Force as the executive agent for NASA support.
The Space Systems Division provided close support to NASA's Project Mercury, providing three of the Mercury Seven astronauts, Cape Canaveral Air Force Station Launch Complex 5 and Launch Complex 14, RM-90 Blue Scout II and Atlas LV-3B launch vehicles, and United States Air Force Pararescue recovery forces. The Space Systems Division was planning to provide similar support to Project Gemini and was supporting 14 NASA programs with 96 research and development officers attached. In April 1962, the position of deputy to the commander of Air Force Systems Command for Manned Space Flight was established at NASA Headquarters, consisting of personnel from the Army, Marine Corps, Navy, and Air Force.
On 11 June 1962, The New York Times broke the story about the SAINT program, creating a political firestorm by claiming that the Air Force was intent on weaponizing space. The public blowback was tremendous, resulting in greater scrutiny from the Defense Department and the White House on military space programs. In the 1962 Air Force Space Plan request, the only programs that received significant funding were the Military Orbital Development System (MODS) space station, the Blue Gemini experimentation program, MIDAS, SAINT, the X-20 Dyna-Soar, and the Titan III launch vehicle. The MODS experimental space station was to be launched on a Titan IIIC booster, and the Blue Gemini program focused specifically on testing rendezvous, docking, and personnel transfer functions; however, there were concerns that Blue Gemini could endanger the X-20 Dyna-Soar. Both MODS and Blue Gemini were cancelled by Defense Secretary McNamara, who instead required the Space Systems Division to work through NASA's Project Gemini.
Project Gemini was managed by NASA, but had a joint Gemini Program Planning Board co-chaired by NASA's associate administrator and the assistant secretary of the Air Force for research and development. As in Project Mercury, the Space Systems Division provided significant support, including nine of the sixteen Gemini astronauts, Cape Canaveral Air Force Station Launch Complex 19, Atlas-Agena and Titan II GLV launch vehicles, and United States Air Force Pararescue recovery forces.
Even before the Apollo Program started, in 1963 both NASA and the Space Systems Division were contemplating future space programs, with NASA focusing on the Apollo Applications Program to create a space station. Secretary McNamara continued to cut Space Systems Division programs, reducing funding for MIDAS and reducing SAINT to a definition study, reorienting anti-satellite and missile defense systems toward ground-based radars and missiles. On 10 December 1963, Secretary McNamara authorized the Space Systems Division to begin development of the Manned Orbiting Laboratory (MOL), an orbital military reconnaissance space station to be launched on a Titan IIIM from Vandenberg Air Force Base Space Launch Complex 6, at the cost of cancelling the Boeing X-20 Dyna-Soar. In addition to scientific experiments, MOL was to provide surveillance of the Soviet Union, naval reconnaissance while over water, and satellite inspection of non-U.S. spacecraft. It was approved for full-scale development on 25 August 1965 by President Lyndon B. Johnson. On 10 June 1969, the Manned Orbiting Laboratory program was cancelled, due to the reliability of uncrewed space systems and its high cost.
General Schriever's retirement in 1966 also marked a change in Air Force space organization. His successor, General James Ferguson, reorganized Air Force Systems Command, reconsolidating the Ballistic Missile Division and Space Systems Division, in large part due to the BMD's lessened responsibilities, into the Space and Missile Systems Organization (SAMSO) on 1 July 1967. SAMSO, like the SSD before it, continued to provide support to the Apollo Program, providing seventeen of the thirty-two astronauts, Cape Canaveral Air Force Station Launch Complex 34, and United States Air Force Pararescue recovery forces.
Deployment of military satellite communications systems
The Second World War made apparent the need for military communications over longer ranges, with greater security, higher capacity, and improved reliability. The first satellite communications concept was offered by science fiction author Arthur C. Clarke in 1945, and immediately after the conclusion of the war the Army Signal Corps experimented with Earth–Moon–Earth communication through Project Diana, using the Moon and Venus as signal reflectors. The Navy also experimented with this method through the Communication Moon Relay, creating two-way voice communications between San Diego, Hawaii, and Washington.
In July 1958, the Advanced Research Projects Agency assigned the Army Signal Corps Project SCORE, the world's first communications satellite. On 18 December 1958, it was launched by an Air Force Ballistic Missile Division SM-65B Atlas, broadcasting a Christmas greeting from President Eisenhower in the very high frequency (VHF) band. In October 1960, the Army Signal Corps launched Courier 1B on an Air Force Ballistic Missile Division Thor-Ablestar; it operated in the ultra high frequency (UHF) band. The Air Force Ballistic Missile Division attempted to produce an artificial ionosphere to reflect communications signals through Project West Ford, but it was rendered obsolete by advances in communications satellites. ARPA began planning for a truly strategic geosynchronous communication system in 1958, assigning the Air Force Ballistic Missile Division responsibility for the booster and spacecraft and the Army Signal Corps the communications element. The effort initially consisted of three repeater satellite programs, and in September 1959 the secretary of defense transferred responsibility for communications satellite management from ARPA to the Army. In February 1960, the three programs were combined into Project Advent, which was assigned to the Army in September. However, the Army would not have operational responsibility for military satellite communications, as the Defense Department was unifying the strategic communications systems of the Army, Navy, and Air Force as part of the Defense Communications System, operated by the Defense Communications Agency, which was established on 12 May 1960. Project Advent was a very ambitious program, with the first tranche of satellites to be launched into 5,600-mile inclined orbits by AFBMD Atlas-Agena launch vehicles and the second tranche into geostationary orbits by AFBMD Atlas-Centaurs. Given cost overruns and technological breakthroughs in smaller satellites, Project Advent was cancelled on 23 May 1962.
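For context on why a geostationary orbit was Advent's goal, the required altitude follows directly from Kepler's third law; the worked calculation below is a minimal illustration using standard values for Earth's gravitational parameter and the sidereal day, not figures drawn from the Advent program itself.

$$
T = 2\pi\sqrt{\frac{a^{3}}{\mu}}
\;\Rightarrow\;
a = \left(\frac{\mu T^{2}}{4\pi^{2}}\right)^{1/3}
= \left(\frac{(3.986\times10^{14}\,\mathrm{m^{3}/s^{2}})\,(86{,}164\,\mathrm{s})^{2}}{4\pi^{2}}\right)^{1/3}
\approx 42{,}164\ \mathrm{km}
$$

Subtracting Earth's equatorial radius of about 6,378 km gives an altitude of roughly 35,786 km, where a satellite's orbital period matches Earth's rotation and it appears fixed over one longitude, allowing fixed ground terminals to point at it continuously.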
With Project Advent's failure, the Defense Department turned to the Air Force-aligned Aerospace Corporation, which had been developing two alternatives. In summer 1962, the Space Systems Division received authorization to proceed with the development of the Initial Defense Communication Satellite Program (IDCSP) to provide communications in the super high frequency (SHF) band. Unlike in Project Advent, the Space Systems Division would have complete control over the spacecraft and booster, with the Army Satellite Communications Agency having authority only over the ground segment and the Defense Communications Agency responsible for unifying the Army and Air Force Space Systems Division efforts. The Space Systems Division also received authorization to develop a second system, the Advanced Defense Communications Satellite Program (ADCSP), at the same time. IDCSP development, initially planned around Atlas-Agena boosters, proved difficult due to intensive Defense Department studies and Secretary McNamara's questioning of whether the military needed to operate its own communications satellites, rather than lease bandwidth from the COMSAT corporation. On 15 July 1964, after failed negotiations with COMSAT and concerns about hosting military payloads on civil satellites that could be used by foreign countries, Secretary McNamara opted to push forward with the more secure and reliable military satellite communications system. The development of the Titan IIIC prompted a change from medium Earth orbits to near-synchronous orbits for the IDCSP.
The IDCSP was originally intended to be just an experimentation program, but was so successful that it became an operational satellite constellation. The first seven satellites were launched on 16 June 1966 by a Space Systems Division Titan IIIC. The second launch, of eight IDCSP satellites, took place on 26 August 1966, but a critical failure resulted in the loss of the launch vehicle and payloads. The third launch occurred on 18 January 1967, placing eight satellites into orbit, with a 1 July 1967 launch, the first by SAMSO, placing three more satellites into orbit. This launch also placed into orbit a number of test satellites, including the Navy's DODGE gravity gradient experiment, the DATS satellite, and Lincoln Experimental Satellite-5. The final eight satellites were launched into orbit on 13 June 1968, creating a constellation of 26 satellites; the Defense Communications Agency declared the IDCSP system operational and changed its name to the Initial Defense Satellite Communications System (IDSCS).
By mid-1968, 36 fixed and mobile ground terminals, the responsibility of the Army, completed the satellite communications system. Originally used for the Army Signal Corps' Project Advent and later co-opted for NASA's Syncom satellite program, two fixed AN/FSC-9 ground terminals, one located at Camp Roberts, California, and the other at Fort Dix, began relaying IDSCS satellite data. Mobile terminals consisted of seven AN/TSC-54 terminals, thirteen AN/MSC-46 terminals, and six ship-based terminals. Ground terminal locations included Colorado, West Germany, Ethiopia, Hawaii, Guam, Australia, South Korea, Okinawa, the Philippines, South Vietnam, and Thailand. In 1967, the Air Force Space and Missile Systems Organization demonstrated the capability of the IDCSP at the 21st Armed Forces Communications and Electronics Association convention in Washington, D.C., connecting Secretary of the Air Force Harold Brown directly with the Seventh Air Force commander in South Vietnam, General William W. Momyer. The Initial Defense Satellite Communications System later became known as the Defense Satellite Communications System Phase I (DSCS I). DSCS I became known for its reliability, and by 1971, fifteen of the twenty-six initial satellites, intended purely as an experiment, remained operational. By mid-1976, three continued to function, several years after their intended shutoff date. The DSCS I constellation provided the Defense Communications Agency service for nearly 10 years and served as the basic design for the British Armed Forces' Skynet 1 satellites, launched by SAMSO Thor-Delta rockets in 1969, and NATO's satellite communications, also launched by a SAMSO Thor-Delta rocket in 1970.
Although DSCS I was superior to radio or cable communications, it remained limited in terms of channel capacity, user access, and overall coverage. Work began on the Defense Satellite Communications System Phase II (DSCS II) to satisfy the original intent of Project Advent and overcome these difficulties. Preliminary work began at the Space Systems Division in 1964 as the Advanced Defense Communications Satellite Program, with the Defense Communications Agency awarding six concept study contracts in 1965; procurement was authorized in June 1968. The Space and Missile Systems Organization planned for a constellation of four satellites in geosynchronous orbit (one over the Indian Ocean, one over the Eastern Pacific Ocean, one over the Western Pacific Ocean, and one over the Atlantic Ocean), with two spare spacecraft. The Defense Communications Agency maintained overall control of the program, with the Air Force Space and Missile Systems Organization responsible for the spacecraft, Titan IIIC boosters, and operations from the Air Force Satellite Control Facility, while the Army Signal Corps was responsible for the ground segment.
DSCS II was a major step forward in communications satellite design, with both broad-area and narrow-beam coverage. The first two satellites were launched on a Titan IIIC on 2 November 1971, placing one over the Atlantic Ocean and one over the Pacific Ocean. After a redesign due to failures in the initial tranche, the second set of satellites was launched on 13 December 1973, and the constellation was declared operational by the Defense Communications Agency in February 1974 with only two satellites on orbit. The third launch, on 20 May 1975, suffered an anomaly in the Titan IIIC's guidance system, which resulted in the spacecraft reentering the atmosphere and the loss of the DSCS II satellites. The constellation was later completed, and by the 1980s DSCS II provided strategic communications through 46 ground terminals, the Diplomatic Telecommunications Service's 52 terminals, and the Ground Mobile Forces' 31 tactical terminals. The last DSCS II satellite was decommissioned on 13 December 1993; design of the Defense Satellite Communications System Phase III (DSCS III) had begun in 1974.
The Space Systems Division, and later the Space and Missile Systems Organization, also provided support to a number of experimental satellites, predominantly the Lincoln Experimental Satellite series, which would often piggyback on military launch missions. SAMSO also contracted Hughes Aircraft Company for the TACSAT communications satellite, which provided both UHF and SHF communications. However, due to funding restrictions it was limited to a single spacecraft, launched on a Titan IIIC from Cape Kennedy Air Force Station on 9 February 1969. TACSAT supported the Apollo 9 recovery efforts, directly linking the USS Guadalcanal with the White House. The Navy, impressed by the success of TACSAT, began development with SAMSO on the Fleet Satellite Communications System (FLTSATCOM). While the Navy provided funding and ground terminals, SAMSO served as the Navy's agent in spacecraft areas and received a portion of the spacecraft's capabilities, forming the Air Force Satellite Communications System (AFSATCOM), which was used to provide global communications for Single Integrated Operational Plan nuclear forces.
Deployment of military weather observation satellite systems
The idea of weather satellites had, much like communications satellites, been conceptualized by early science fiction authors like Arthur C. Clarke. Weather forecasting had been a crucial military capability since ancient times, however meteorologists were rarely able to gather observations over land controlled by a hostile adversary, and there was almost always a lack of coverage over the open ocean. Following the Second World War, the 1946 RAND report predicted that weather observations over enemy territories would be the most valuable capability provided by satellites. By 1961, the Space Systems Division and the Aerospace Corporation had begun studying requirements for military weather satellites; NASA, however, had received authority to develop weather satellites for all governmental users, including the military.
NASA's Television Infrared Observation Satellite (TIROS) was designed to provide weather observation data to the United States Weather Bureau, with TIROS-1 launching from Cape Canaveral Air Force Station Launch Complex 17 on an Air Force Ballistic Missile Division Thor-Able booster on 1 April 1960. With the success of TIROS, the Department of Defense, Department of Commerce, and NASA convened to develop a single weather system that would satisfy the needs of both civilian and military users, agreeing to the National Operational Meteorological Satellite System in April 1961. NASA and the National Oceanic and Atmospheric Administration jointly developed the Nimbus program of second-generation weather satellites for meteorological research and science.
Ultimately, in 1963 the Aerospace Corporation recommended that the military develop its own weather satellite system. The Space Systems Division began development of the Defense Satellite Applications Program (DSAP); because DSAP was intended to provide direct support to Strategic Air Command and the National Reconnaissance Office, its existence remained classified until 17 April 1973, when secretary of the Air Force John L. McLucas decided to use its weather data to support the Vietnam War and to provide declassified data to the Department of Commerce and the scientific community. In December 1973, it was renamed the Defense Meteorological Satellite Program (DMSP).
Initial DMSP satellites transmitted data to Fairchild Air Force Base, Washington, and Loring Air Force Base, Maine, from where it was sent to the Air Force Global Weather Center at Offutt Air Force Base. Tactical data was passed to mission planners in Vietnam, while auroral data was given to the Air Force Cambridge Research Laboratory and the National Oceanic and Atmospheric Administration for scientific research. The fifth block of DMSP satellites was launched on Thor-Burner rockets into polar orbits, and in 1973 DMSP became a tri-service program, adding Army and Navy participation.
The DMSP was operated by Strategic Air Command's 4000th Support Group beginning on 1 February 1963. The 4000th Support Group was reassigned to SAC's 1st Strategic Aerospace Division on 1 January 1966, being renamed the 4000th Aerospace Application Group on 1 January 1973 and then the 4000th Satellite Operations Group on 3 April 1981.
Development of the Global Positioning System
Satellite navigation systems were based upon radio navigation systems such as LORAN; however, terrestrial systems could only provide positioning in two dimensions at limited ranges, while space-based systems could provide positioning in three dimensions, plus velocity, anywhere on the Earth. On 13 April 1960, an Air Force Ballistic Missile Division Thor-Ablestar launched the Navy's first Transit navigation satellite into orbit. Transit was designed to provide 600-foot accuracy for naval ships and ballistic missile submarines, however it was too slow and intermittent to meet the precision requirements of high-speed aircraft and ground-launched missiles.
In 1963, the Aerospace Corporation prompted the Space Systems Division to begin work on Project 621B (Satellite System for Precise Navigation), which was intended to provide accurate, all-weather positioning data anywhere on Earth. At the same time, the Navy began work on Timation as a follow-on to Transit. The first Timation satellites were launched by the Space and Missile Systems Organization in 1967 and 1969. The Army also developed the SECOR satellite system. In 1968, the Defense Department organized the Navigation Satellite Executive Committee as a tri-service committee to coordinate the various programs. In 1972, Air Force Colonel Bradford W. Parkinson worked to combine SAMSO's Project 621B and the Navy's Timation program, and on 17 April 1973, Deputy Secretary of Defense Bill Clements unified them under the SAMSO-led Defense Navigation Satellite Development Program. The program adopted the Air Force's signal structure and frequencies and the Navy's orbital deployment plan and use of atomic clocks. On 2 May 1974, the program was renamed the Navstar Global Positioning System (GPS).
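The positioning principle these converging programs relied on can be summarized with the standard pseudorange model of satellite navigation; the sketch below is an illustrative textbook formulation rather than a description of the original 621B or Timation signal designs.

$$
\rho_i = \sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2} + c\,\delta t, \qquad i = 1, \dots, 4
$$

Here $(x_i, y_i, z_i)$ are the broadcast satellite positions, $(x, y, z)$ is the unknown receiver position, and $\delta t$ is the offset of the receiver's inexpensive clock from the satellites' atomic timescale. With four unknowns, measurements from at least four satellites are needed for a three-dimensional fix, and receiver velocity can then be derived from the Doppler shift of the same signals.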
Deployment of space-based missile warning systems
National reconnaissance, generally considered the most important military space mission, had been assigned to the National Reconnaissance Office since 1961; strategic missile warning programs, however, remained within the military services, specifically the Air Force. The Space Systems Division managed two of these programs: the Missile Defense Alarm System (MIDAS) satellites, which used infrared sensors to detect missile or rocket launches, and the Vela Hotel satellites, which detected atmospheric and outer space nuclear detonations to monitor compliance with the Partial Nuclear Test Ban Treaty. Both of these mission sets were predicted in the 1946 Project RAND report.
Project Vela development was started by the Advanced Research Projects Agency in response to international conferences and congressional hearings. A joint Defense Department-United States Atomic Energy Commission program, Project Vela consisted of three segments: Vela Uniform, which detected underground or surface detonations using seismic monitors; Vela Sierra, which used ground-based sensors to detect above-surface detonations; and Vela Hotel, which consisted of a constellation of space-based sensors for atmospheric and exoatmospheric nuclear detonations. ARPA assigned the Air Force Ballistic Missile Division responsibility for the spacecraft and boosters, while Atomic Energy Commission laboratories provided instrumentation and the Lawrence Radiation Laboratory provided the sensors. The first test launches were authorized on 22 June 1961 by ARPA, launching on Atlas-Agena boosters. On 16 October 1963, the first operational Vela Hotel satellites were launched, with a second pair following on 17 July 1964. In the 1970s, dedicated Vela Hotel satellites were phased out and replaced with the Integrated Operational Nuclear Detonation Detection System (IONDS), whose sensors were placed on Defense Support Program and Navstar Global Positioning System satellites as secondary payloads.
Unlike Vela Hotel, MIDAS experienced a number of problems due to its complexity. In fall 1960, General Laurence S. Kuter, commander-in-chief of North American Air Defense Command, and Lieutenant General Joseph H. Atkinson, commander of Air Defense Command, urged chief of staff of the Air Force General Thomas D. White to accelerate and expand the troubled MIDAS program. This led to tension between the Air Force Ballistic Missile Division, under Air Research and Development Command, which sought to continue research and development, and Air Defense Command, which wanted to employ the system operationally as early as possible. The final MIDAS development plan put forward by AFBMD on 31 March 1961 scheduled twenty-seven development launches and initial operating capability in January 1964. On 16 January, the Joint Chiefs of Staff approved NORAD as the operational command with Air Defense Command as the service command. Air Defense Command called for eight satellites in two orbital rings, ensuring constant coverage of the Soviet Union, with sensor data transmitted to the Ballistic Missile Early Warning System (BMEWS) radar sites, then relayed to the NORAD command post in the Cheyenne Mountain Complex. In summer 1961, director of defense research and engineering Harold Brown conducted a review of the MIDAS program, expressing concern about its ability to detect light ballistic missiles and submarine-launched ballistic missiles. After several test flights, the MIDAS program was reduced to a research and development program, however progress was made in 1963 when MIDAS satellites successfully detected nine launches of LGM-30A Minuteman and UGM-27 Polaris solid-fuel missiles and SM-65 Atlas and HGM-25A Titan I liquid-fuel missiles. Due to defense budget cuts and technological obsolescence, the MIDAS program was ended in 1966 without becoming an operational system.
MIDAS was replaced by the Defense Support Program (DSP) in August 1966. DSP was originally intended to monitor the Soviet Strategic Missile Forces' Fractional Orbital Bombardment System nuclear weapons system, but it was also developed as a replacement for the ground-based Ballistic Missile Early Warning System. In November 1970, the first DSP satellite was launched on a Titan IIIC. The primary ground station was at the Air Force Satellite Control Facility, with a secondary ground station constructed in April 1971 at the RAAF Woomera Range Complex in Australia and an additional ground station constructed at Buckley Air National Guard Base in Colorado. Operational control of the Defense Support Program was conducted by North American Air Defense Command, with the newly christened Aerospace Defense Command conducting day-to-day operations.
Space defense operations
Following the debate between the Air Force and Navy over operational control of the Space Detection and Tracking System (SPADATS), on 7 November 1960 the Joint Chiefs of Staff assigned operational command of SPADATS to North American Air Defense Command (NORAD) and Continental Air Defense Command (CONAD). The Air Force component, Spacetrack, was operationally assigned to Air Defense Command. On 14 February 1961, the 1st Aerospace Surveillance and Control Squadron (assigned to the 9th Aerospace Defense Division on 1 October 1961 and renamed the 1st Aerospace Control Squadron on 1 July 1962) was activated to operate the SPADATS data collection and catalog center as part of NORAD's Space Defense Center at Ent Air Force Base, assuming the responsibilities of the Interim National Space Surveillance and Control Center. On 1 February 1961, NORAD assumed operational command of the Navy Space Surveillance System (NAVSPASUR) and its data tracking facility at Naval Support Activity Dahlgren.
The 9th Aerospace Defense Division had responsibility for all Air Defense Command space forces, including the Missile Defense Alarm System, Ballistic Missile Early Warning System, Space Detection and Tracking System, NORAD Combat Operations Center, the Bomb Alarm System, and the Nuclear Detonation System.
Space surveillance operations were conducted by the 73rd Aerospace Surveillance Wing starting on 1 January 1967. Prior to the standup of the 73rd Aerospace Surveillance Wing, squadrons reported directly to the 9th Aerospace Defense Division. The initial Spacetrack sensors included the Millstone Hill Radar at the Massachusetts Institute of Technology and Baker-Nunn cameras at the Smithsonian Astrophysical Observatory, Edwards Air Force Base, Johnston Atoll, and Oslo, Norway. By 1965 the system had grown to include Air Defense Command's AN/FPS-17 and AN/FPS-80 radars operated by the 16th Surveillance Squadron at Shemya Air Force Base, AN/FPS-17 and AN/FPS-79 radars operated by the 19th Surveillance Squadron at Pirinçlik Air Base, the AN/FPS-49 Ballistic Missile Early Warning System (BMEWS) prototype operated by the 17th Surveillance Squadron at Moorestown, New Jersey, and the AN/FPS-50 BMEWS prototype at Trinidad Air Base. The FSR-2 electro-optical system at Cloudcroft Observatory, New Mexico, and the AN/FPS-85 radar operated by the 20th Space Surveillance Squadron at Eglin Air Force Base joined the United States Space Surveillance Network in 1967. In the 1970s, the Baker-Nunn network was replaced by the Ground-based Electro-Optical Deep Space Surveillance (GEODSS) network, with locations at Socorro, New Mexico, Diego Garcia, the Maui Space Surveillance Complex, and Morón Air Base. The Baker-Nunn network and GEODSS system were operated by the 18th Space Surveillance Squadron. Other space surveillance squadrons included the 2nd Surveillance Squadron (Sensor).
Air Defense Command's Ballistic Missile Early Warning System, operated by the 71st Surveillance Wing, also provided supplemental space surveillance. These included the radars operated by the 12th Missile Warning Squadron at Thule Air Base, 13th Missile Warning Squadron at Clear Air Force Station, and the Royal Air Force at RAF Fylingdales.
The 10th Aerospace Defense Group operated Weapon System 437, a nuclear anti-satellite Thor DSV-2 missile system.
In recognition of the importance of space defense, Air Defense Command was redesignated as Aerospace Defense Command on 15 January 1968 and the 9th Aerospace Defense Division was inactivated and replaced by the Fourteenth Aerospace Force on 1 July 1968.
Space launch fleet and ground support infrastructure
The Space Systems Division's monopoly on space launch vehicles was a prime reason for its primacy in the space domain. Early ballistic missiles, such as the PGM-17 Thor and SM-65 Atlas (which included the SM-65A, SM-65B, and SM-65C prototypes and the SM-65D, SM-65E, and SM-65F operational missiles), did not perform adequately compared to Soviet Strategic Missile Forces ICBMs and were rapidly phased out in favor of the solid-fueled LGM-30 Minuteman and UGM-27 Polaris missiles. However, the Atlas and Thor missiles gained new life as the backbone of the Space Systems Division's launch fleet.
The Douglas Aircraft Company Thor space launch vehicle performed its first space launch in December 1959, primarily performing space launches from Vandenberg Air Force Base's Western Test Range. Specific variants of the Thor space launch vehicle included the Thor-Able (with an Able second stage), the Thor-Ablestar, the Thor-Delta (considered the first member of the Delta space launch vehicle family), the Thor-Burner, the Thor DSV-2U, and the Thorad-Agena.
The General Dynamics Astronautics Atlas space launch vehicle was more powerful than the Thor and primarily launched heavier payloads from Cape Canaveral Air Force Station's Eastern Test Range. The SM-65B Atlas, which was also a prototype for the operational missile, performed the family's first space launch. A number of space launch vehicles were based on the SM-65D Atlas, including the Atlas SLV-3, which flew with RM-81 Agena and Centaur upper stages; the Atlas LV-3B, which launched the final four Project Mercury spaceflights; the Atlas-Agena; the Atlas-Able; and the Atlas-Centaur. Decommissioned SM-65E Atlas and SM-65F Atlas missiles were converted into the Atlas E/F launch vehicle. Other launch vehicles derived from the original SM-65 missile included the Atlas G, Atlas H, Atlas I, Atlas II, Atlas III, and Atlas V.
While the Thor and Atlas were considered medium boosters, the Martin Titan IIIC was considered a heavy booster and was the first rocket with the power to launch payloads into geosynchronous orbit. Its first launch was on 18 June 1965. The Titan IIIC had two other variants, the Titan IIIB, which was originally designed to support the Manned Orbiting Laboratory, and the Titan IIID. The Titan IIIE was used by NASA for interplanetary missions, and the Titan IIIA was an early rocket in the family. The success of the Titan IIIC prompted some to call it the "DC-3 of the Space Age."
The Space Systems Division's primary launch sites were Cape Canaveral Air Force Station (briefly known as Cape Kennedy Air Force Station) in Florida, which managed the Eastern Test Range, and Vandenberg Air Force Base in California, which managed the Western Test Range. Cape Canaveral was selected after the end of the Second World War to be the western end of the Long Range Proving Ground and the Air Force absorbed Banana River Naval Air Station, renaming it Patrick Air Force Base, to support the missile tests there. In the 1960s, Cape Canaveral Air Force Station underwent major expansion to support NASA crewed spaceflights. Vandenberg Air Force Base was formed from the Army's Camp Cooke and briefly known as Cooke Air Force Base. Management of the Eastern Range at Cape Canaveral was initially the responsibility of the Air Force Eastern Test Range while launches were performed by the 6555th Aerospace Test Wing. Vandenberg AFB was used for testing ICBMs and IRBMs, forming part of the Pacific Missile Range, and was selected for polar launches. In 1971, Vandenberg was selected to perform near-polar Space Shuttle launches. Management of the Western Range at Vandenberg was initially the responsibility of the Air Force Western Test Range while launches were performed by the 6595th Aerospace Test Wing.
A second key element of ground infrastructure, the Air Force Satellite Control Facility, consisted of a global system of tracking, telemetry, and control stations, with its central control facility located in California. The first Air Force Ballistic Missile Division tracking stations were set up in 1958 at the Kaena Point Satellite Tracking Station, in 1959 at Vandenberg Air Force Base and New Boston Air Force Station, in 1961 at Thule Air Base, Greenland, in 1963 at Mahé, Seychelles, and in 1965 at Andersen Air Force Base, Guam. The control center in California was first referred to as the Air Force Satellite Test Center, and the 6594th Test Wing (later redesignated the 6594th Aerospace Test Wing) operated the facility at Onizuka Air Force Station.
This enhanced focus on space resulted in a number of organizational changes, including the consolidation of the Eastern and Western Test Ranges under Air Force Systems Command's National Range Division in January 1964, the transfer of the Pacific Missile Range from the Navy to the Air Force, and the Air Force's assumption of responsibility for the satellite tracking network in 1963. The National Range Division, headquartered at Patrick Air Force Base, established the Air Force Space Test Center at Vandenberg Air Force Base to manage all Pacific range activities. In January 1964, the National Range Division also gained responsibility for the Air Force Satellite Control Facility. This change was reversed in July 1965, with the Space Systems Division regaining responsibility for the Satellite Control Facility. The establishment of the Space and Missile Systems Organization in 1967 resulted in the formation of the Space and Missile Test Center on 1 April 1970 at Vandenberg Air Force Base, consolidating all Western Range activities under SAMSO. This consolidation was completed in 1977, when the Eastern Test Range was assigned to SAMSO.
Space forces in the Vietnam War
The first combat employment of space forces occurred during the Vietnam War. In particular, weather and communications satellite support was considered critical by ground and air commanders.
The Defense Meteorological Satellite Program, in particular, proved absolutely critical to the Seventh Air Force, which relied upon cloud-free environments to conduct low-level fighter, tanker, and gunship operations. Starting in 1965, Strategic Air Command began providing DMSP information to Air Force planners, with NASA providing information from its Nimbus satellites. The Navy was unable to receive DMSP data until 1970, when the USS Constellation gained the proper readout equipment. Specific operations supported by space forces through DMSP data included the Navy's destruction of the Thanh Hóa Bridge in North Vietnam and Operation Ivory Coast, conducted by Army Special Forces and Air Commandos to rescue American prisoners of war in 1970.
Satellite communications support began in June 1966, with a terminal being activated at Tan Son Nhut Air Base using NASA's Synchronous Communications Satellite to communicate with Hawaii. Initial Defense Communication Satellite Program terminals were installed in Saigon and Nha Trang in July 1967, enabling the transmission of high-resolution photography between Saigon and Washington D.C., enabling intelligence analysts and national leadership to assess near-real-time battlefield intelligence. Commercial satellite communications support was also provided by COMSAT.
Air Force Space Command (1982–2019)
Inactivation of Aerospace Defense Command
Despite the rapid development of military space forces within the Air Force, there was no centralized command for them. Air Force Systems Command was responsible for research, development, and procurement, flying the Defense Satellite Communications System satellites for the Defense Communications Agency and other pre-operational constellations, as well as executing space launches and managing the Air Force Satellite Control Facility; Aerospace Defense Command (ADCOM) was responsible for space surveillance and missile defense for North American Air Defense Command; and Strategic Air Command was responsible for flying the Defense Meteorological Support Program. Following a change in nuclear posture, NORAD's primary mission shifted from active defense against a nuclear attack to surveillance and warning of an impending attack, resulting in a major reorganization of Aerospace Defense Command. ADCOM's atmospheric interceptors were cut and replaced with space-based warning systems, raising the profile of space within NORAD.
The development of the Space Shuttle began as a joint Defense Department-NASA program, with the Space and Missile Systems Organization serving as the Defense Department's executive agent on the program. The Space Shuttle promised a reusable spacecraft and an end to costly expendable launch vehicles, as well as a way to reinvigorate the Air Force's place within space. Military requirements were taken into account while designing the Space Shuttle orbiter, dictating the size of the payload bay. Ultimately, the Space Shuttle was intended to replace all but the smallest and largest expendable space launch vehicles. The Defense Department and NASA jointly chose Kennedy Space Center and Vandenberg Air Force Base's Space Launch Complex 6 as shuttle launch locations. To centralize U.S. military requirements for the shuttle the Defense Department Space Shuttle User Committee was established in November 1973 and chaired by the Air Staff's director of space.
The military application of the shuttle and the increased space mission for Aerospace Defense Command prompted an internal competition among the Air Force's major commands for the space mission starting in 1974. Air Force Systems Command, through the Space and Missile Systems Organization, was the lead for space research, development, launch, and procurement. This gave SAMSO development responsibility for the Space Shuttle; however, due to poorly defined lines separating experimental from operational space, SAMSO also had an operational space role. Aerospace Defense Command sought operational responsibility for the Space Shuttle due to its experience as the Air Force's primary operational space command and its control of the United States Space Surveillance Network. Military Airlift Command and Strategic Air Command also attempted to claim operational responsibility for the shuttle. This debate over the shuttle, and later the Global Positioning System, prompted the Defense Department and the Air Force to begin reevaluating whether space systems should be assigned to commands on an individual basis, as was then the practice, or centralized in a single command.
Despite this fragmentation, operational space systems were being developed and deployed at an ever-expanding rate. In February 1977, the Defense Communications Agency authorized the Space and Missile Systems Organization to begin development of the Defense Satellite Communications System Phase III (DSCS III), with an expected operational date of 1981 to 1984. Development of the Navstar Global Positioning System (GPS) was also accelerating, and by 1981 five test satellites were on orbit and supporting Navy requirements. Aerospace Defense Command's Defense Support Program was providing constant surveillance of rocket launches by the Soviet Strategic Missile Forces and the Chinese People's Liberation Army Second Artillery Corps. Aerospace Defense Command's Space Detection and Tracking System (SPADATS) continued to expand, adding the AN/FPS-108 Cobra Dane radar at Shemya Air Force Base in 1977 and, in 1982, incorporating the AN/FPS-115 PAVE PAWS radars operated by the 7th Missile Warning Squadron at Beale Air Force Base and the 6th Missile Warning Squadron at Cape Cod Air Force Station into SPADATS. In the early 1980s, the Ground-based Electro-Optical Deep Space Surveillance System began to fully replace the Baker-Nunn telescopes. The space surveillance system highlighted the divide between the space communities, with Aerospace Defense Command's Space Detection and Tracking System focused almost entirely on fulfilling NORAD requirements, while Air Force Systems Command's satellite infrastructure focused on research and development.
Renewed Soviet co-orbital anti-satellite tests in 1976 by the Soviet Air Defense Forces and Strategic Missile Forces added to the heightened sense of urgency regarding space. The United States had no anti-satellite capability, having decommissioned Aerospace Defense Command's Program 437 in 1975 after placing it in standby status in 1970. In the fall of 1976, President Ford authorized development of what would become the McDonnell Douglas F-15 Eagle-launched ASM-135 ASAT, and Aerospace Defense Command began a reevaluation of its space defense capabilities.
In 1977, the Air Staff released the Proposal for a Reorganization of USAF Air Defense and Space Surveillance/Warning Resources, known informally as the Green Book Study. It marked the beginning of the end for Aerospace Defense Command, calling for its inactivation and the transition of its air defense mission to Tactical Air Command, its communications assets (not satellite communications, which were operated by Air Force Systems Command) to Air Force Communications Command, and its space assets to Strategic Air Command. General James E. Hill attempted to fight its inactivation, highlighting its bi-national nature and advocating for Aerospace Defense Command to become a Space Defense Command. United States Under Secretary of the Air Force Hans Mark was also concerned about inactivating Aerospace Defense Command, objecting to merging Aerospace Defense Command's defensive systems with the offensive systems of Strategic Air Command, which the Canadians opposed. Moreover, he expressed concern that space systems modernization would not receive sufficient attention within Strategic Air Command, whose primary focus was offensive nuclear bombers and missiles. Under Secretary Mark lobbied, ultimately unsuccessfully, to have Aerospace Defense Command become the primary space command within the Air Force. General Hill also argued to his fellow generals that the Air Force required a space operations command, and that Aerospace Defense Command fit that role perfectly. Ultimately, Air Force leadership did not seem to understand the importance of space, instead forming an Air Staff group to examine the feasibility of a future space command. The Space Mission Organization Planning Executive Committee was appointed by Air Force chief of staff General Lew Allen in November 1978 to examine all aspects of the space mission. Among the analysts was then Lieutenant Colonel Thomas S. Moorman Jr., a future commander of Air Force Space Command and vice chairman of the Joint Chiefs of Staff. The study proposed a central space command; however, General Allen did not favor centralization. On 31 March 1980, Aerospace Defense Command was inactivated as an Air Force major command (although it remained in existence as a specified combatant command until 16 December 1986). In 1980, its space activities were transferred to Strategic Air Command.
Establishment of Air Force Space Command
On 1 October 1979, the Space and Missiles Systems Organization was split, establishing the Space Division and Ballistic Missile Office. This change was in part due to the strain put on SAMSO for developing the Space Shuttle and the LGM-118 Peacekeeper concurrently. The reorganization also resulted in the subordination of the Eastern Range and Patrick Air Force Base to the Eastern Space and Missile Center and the Western Range and Vandenberg Air Force Base to the Western Space and Missile Center, both of which were subordinated under the Space and Missile Test Organization. Air Force Systems Command also established a deputy commander for space operations, who was made responsible for all non-acquisitions space functions, including liaising with NASA and the integration and operational support of military shuttle payloads. In preparation for classified shuttle operations, Air Force Systems Command activated the Manned Spaceflight Support Group at Johnson Space Center. Ultimately, the Manned Spaceflight Support Group was intended to transition into the Air Force's own Shuttle Operations and Planning Complex at the Consolidated Space Operations Center. The Air Force Satellite Control Facility was transitioned from the Space Division to report directly to the Systems Command deputy commander for space operations. In 1979, Air Force doctrine recognized space as a mission area for the first time, and in 1981 the Air Staff Directorate for Space Operations was created within the Deputy Chief of Staff for Operations, Plans, and Readiness.
In 1981, Representative Ken Kramer introduced a resolution in the House of Representatives that would have renamed the Department of the Air Force and the United States Air Force as the Department of the Aerospace Force and the United States Aerospace Force, respectively. The proposal, which made Air Force leadership extremely uncomfortable, would have changed the Air Force's legislative mandate to: "be trained and equipped for prompt and sustained offensive and defensive operations in air and space, including coordination with ground and naval forces and the preservation of free access to space for U.S. spacecraft" and called upon the Air Force to create a space command. Under this pressure, and facing the possibility that President Reagan would propose that an independent space force be created, the Air Force relented, committing to the establishment of a major command for space after briefly considering an organizational relationship in which the commander of the Space Division would also be dual-hatted as the Aerospace Defense Command deputy commander for space.
On 1 September 1982, Space Command was established at Peterson Air Force Base, with General James V. Hartinger triple-hatted as the commander of Space Command, NORAD, and Aerospace Defense Command. The commander of Air Force Systems Command's Space Division served as Space Command's vice commander. At the same time, the Space Technology Center was established at Kirtland Air Force Base to consolidate the three Air Force Systems Command laboratories working on space-related research in geophysics, rocket propulsion, and weapons. It was the intent of the Air Force that Space Command would grow to become a unified combatant command, which was necessary to gain the support of the United States Navy.
The creation of Space Command on 1 September 1982 marked the beginning of the centralization of space into a single organization, which would culminate under its direct successor, the United States Space Force. In late 1982 and early 1983, Strategic Air Command began to transfer its 50 space activities to Space Command, including Space Command's headquarters at Peterson Air Force Base, Thule Air Base and Sondrestrom Air Base in Greenland, Clear Air Force Station, the Defense Meteorological Support Program, and the Defense Support Program, as well as the Military Strategic and Tactical Relay (Milstar) and the Global Positioning System, which were still in the development and acquisition phase.
Milstar was intended to provide communications for the National Command Authority and to ultimately replace the Navy's Fleet Satellite Communications System and the Air Force Satellite Communications System. The first Defense Satellite Communications System Phase III satellites began launching in 1982, beginning the replacement of the Defense Satellite Communications System Phase II. The Navstar Global Positioning System was nearing the end of its prototyping and validation phase when it was turned over to Space Command in 1984, with seven Block I satellites on orbit. While Strategic Air Command willingly turned over its space systems, it attempted to maintain a voice in their administration, ultimately failing. Air Force Systems Command also attempted to retain much of its space role through the Space Division, despite being a research and development command. It took until 1987 for Air Force Systems Command to transition the Air Force Satellite Control Network to Air Force Space Command (renamed from Space Command on 15 September 1985 to distinguish it from United States Space Command), and the Consolidated Space Operations Center only became operational in March 1989.
On 23 September 1985, United States Space Command (USSPACECOM) was established as a functional unified combatant command for military space operations. From a bureaucratic perspective, the creation of U.S. Space Command was a prerequisite to gaining Army and Navy support for Air Force Space Command. The creation of U.S. Space Command also received significant support from President Reagan, who was pursuing the Strategic Defense Initiative ballistic missile defense system, which was dependent on space-based sensors and interceptors. The primary service component of U.S. Space Command was Air Force Space Command (AFSPC or AFSPACECOM), while the Navy had established Naval Space Command (NAVSPACECOM) on 1 October 1983. The Army's component was smaller, first consisting of the Army Space Planning Group from 1985 to 1986, before being upgraded to the Army Space Agency in 1986 and finally established as Army Space Command in 1988. At the activation ceremony was retired chief of naval operations Admiral Arleigh Burke, who had unsuccessfully lobbied for a unified space command twenty-five years earlier. The commander of U.S. Space Command was triple-hatted as the commander of Air Force Space Command and of the bi-national North American Aerospace Defense Command. USSPACECOM assumed from NORAD the missile warning and space surveillance missions, as well as Cheyenne Mountain Air Force Station's Missile Warning Center and Space Defense Operations Center.
The Space Shuttle Challenger disaster caused significant concern within Air Force Space Command, as the Space Shuttle, for which Air Force Space Command was operationally responsible during military launch missions, was intended to be its primary space launch vehicle. Programs such as the Navstar Global Positioning System and Defense Support Program improvements suffered significant setbacks, and the expendable boosters of Air Force Systems Command's Space Division were the only means of accessing space. The Titan 34D, Titan IV, and Delta II space launch vehicles became the workhorses of the Space Division's launch fleet. In 1987, General John L. Piotrowski, the U.S. Space Command commander, began to argue that the space launch mission needed to be transferred from the Space Division to Air Force Space Command, enabling U.S. Space Command to directly request launch operations during wartime. In December 1988, the Air Force announced its intent to consolidate space launch operations under Air Force Space Command. On 1 October 1990, Air Force Systems Command transferred Cape Canaveral Air Force Station, Patrick Air Force Base, Vandenberg Air Force Base, the Western Range and Eastern Range, and the Delta II and Atlas E/F launch missions. The remaining Atlas II, Titan 23G, and Titan IV missions were transferred over the following months. The Space Division also reassumed its former name of the Space Systems Division.
Space forces in the Gulf War
Although the Vietnam War was the first war which space forces supported, the Gulf War has sometimes been referred to as the first space war because of the crucial role that space forces played in supporting land, air, and maritime forces. Prior to the Gulf War, most space forces were focused on strategic nuclear deterrence, not on support to tactical forces. Space forces, specifically satellite communications forces, had been providing support to tactical forces during the 1982 Falklands War, the 1983 United States invasion of Grenada, and provided real time mission planning data to strike aircraft in the 1986 United States bombing of Libya. The first use of the Navstar Global Positioning System occurred in the 1988 Operation Earnest Will, and during the United States invasion of Panama, Air Force Space Command provided communications through the Defense Satellite Communications System and weather support through the Defense Meteorological Satellite Program. In contrast, the Gulf War utilized the full range of U.S. space forces, with over sixty satellites providing 90% of theater communications and command and control for an army of 500,000 troops, weather support for mission planners, early warning of Iraqi Scud missile launches, and navigation support to terrestrial forces.
At the beginning of Operation Desert Shield, the defensive and preparation phase of the war, military communications satellites provided support only for an American administrative unit in Bahrain and two training groups in Saudi Arabia, and no weather, navigation, early warning, or remote sensing support was routinely tasked to United States Central Command, requiring time for space forces to be assigned to the region. Iraq possessed no space forces of its own, contracting satellite communications from Intelsat, Inmarsat, and Arabsat; however, its military leadership made no effort to integrate space into its military planning.
Analysts after the war stated that satellite communications forces provided an absolutely crucial capability, as much of the desert did not have reliable telecommunication networks. Satellite communications carried over 90% of all communications for the military campaign, with commercial satellites accounting for 24% of the traffic. Coalition forces received communications satellite support from Air Force Space Command's Defense Satellite Communications System, Naval Space Command's Fleet Satellite Communications System, the NATO III communication satellites, and the Royal Air Force's Skynet satellite system. In August 1990, the DSCS network consisted of two DSCS II satellites and three DSCS III satellites, with one DSCS III in reserve and two DSCS II satellites available for limited operational use. However, there were concerns about whether the DSCS network could provide the requisite capacity and that satellite communications might be jammed by Iraqi forces, resulting in the reallocation of spacecraft by the 3rd Satellite Control Squadron, which flew the constellation.
The Navstar Global Positioning System was the most widely known space system used during the war. The first five operational Block II spacecraft were launched on Delta II rockets in 1989, joining the Block I prototypes in orbit. The Gulf War accelerated the program, and by 22 August 1990 the constellation consisted of fourteen satellites (six Block I prototypes and eight Block II operational satellites). Launches of two more Block II satellites on 2 October and 26 November increased the constellation to 16 satellites just before the commencement of Operation Desert Storm. Army Space Command had purchased 500 demonstration GPS receivers, providing them to fielded forces in August. The Army soon realized the critical navigation capability they provided to its ground forces and put in an emergency requisition for 1,000 GPS receivers and 300 vehicle installation kits. Later, in December, it requested a further 7,178 GPS receivers.
Coalition commanders also understood the importance of weather and Earth-monitoring satellite data in the Gulf region. Weather support was provided by Air Force Space Command's three Defense Meteorological Satellite Program (DMSP) spacecraft and the National Oceanic and Atmospheric Administration's two Television Infrared Observation Satellites and its Geostationary Operational Environmental Satellites. Coalition forces also received weather data from the Japan Meteorological Agency's Himawari satellites, the European Organisation for the Exploitation of Meteorological Satellites' two Meteosats, and the Soviet Union's twelve Meteor satellites. Air Force Space Command's DMSP was considered the most useful of the coalition space weather systems, with DMSP terminals provided to Army ground forces and installed on Navy carriers and flagships. Earth imaging data was provided by the United States Geological Survey's Landsat 4 and Landsat 5 spacecraft, as well as the French CNES SPOT satellites. Coalition leadership was concerned that Iraq would attempt to obtain imaging data, and convinced the Landsat and SPOT operators not to make any imagery available for purchase by Iraq. The Air Force used Landsat data in the construction of airfields; however, both the Air Force and the Marine Corps preferred to use SPOT for mission planning and rehearsal.
Space-based early warning provided by Air Force Space Command's Defense Support Program (DSP) proved critical in detecting Iraqi Scud ballistic missile strikes against coalition forces and Israel. In August 1990, the DSP constellation consisted of three operational satellites and two spares.
After Iraq ignored the U.N. ultimatum to withdraw from Kuwait, Operation Desert Storm, the offensive portion of the campaign, commenced. The Defense Satellite Communications System performed over 700,000 transactions per day and enabled immediate updates of the Air Tasking Order (ATO). Over 1,500 satellite communications terminals were assigned in theater. DSCS itself met over 50% of all satellite communications requirements, providing the ATO to every air base and carrier. The Navstar Global Positioning System enabled the Army's "left hook" across the Iraqi desert, and positioning data provided accuracy to special forces, artillery, and strike aircraft that had never before been achieved in the history of warfare. GPS specifically enabled the Boeing B-52 Stratofortress to perform all-weather raids, provided precise coordinates for cruise missile strikes in Baghdad, and enabled Army Apache helicopters to create major gaps in the Iraqi air defense networks. DMSP data provided accurate weather reports that enabled the use of precision laser-guided munitions, tracked rain and sandstorms, and provided updates on oil fires, oil spills, and the possible spread of chemical agents. Defense Support Program satellites provided early warning to Army Air Defense Artillery MIM-104 Patriot missile batteries.
The Space Commission
The lessons learned during the Gulf War resulted in a renaissance for military space forces, which saw their profile rise within the U.S. Armed Forces. No longer divided among separate commands, Air Force Space Command began to define its mission sets under the categories of space control, force application, force enhancement, and space support. Space control operations were intended to maintain the ability to use space while denying an adversary the ability to do the same, including the development of anti-satellite weapons like the ASM-135 ASAT. Force application was defined as fire support operations from space, such as ballistic missile defense and power projection operations against terrestrial targets. Elements of the Strategic Defense Initiative, such as Brilliant Pebbles and Brilliant Eyes, promised a more aggressive military role for space. Space programs and systems continued to develop, including the completion of the 24-satellite Navstar Global Positioning System constellation in 1993, the development of the Space-Based Infrared System to replace the Defense Support Program, and the first launches of Milstar.
In 1992, Air Force Systems Command was merged with Air Force Logistics Command to become Air Force Materiel Command, and the Space Systems Division became the Space and Missile Systems Center. With the completion of the GPS constellation, space acquisitions shifted to replacing aging spacecraft. In 1994, SMC began development of the Space-Based Infrared System (SBIRS), a missile warning constellation that would serve as the successor to the Defense Support Program (DSP). Milstar also had a replacement in the works, with the Advanced Extremely High Frequency (AEHF) satellite communications constellation contracted in 1999. A year later, SMC issued a contract for the Wideband Global SATCOM (WGS) system to replace the Defense Satellite Communications System (DSCS). The Space and Missile Systems Center also began development of a new generation of launch vehicles, with the Atlas III procured in 1999. The Evolved Expendable Launch Vehicle program was contracted in 1995, resulting in the Delta IV and Atlas V space launch vehicles.
Despite the rising prominence of space forces during the Gulf War, a number of prominent generals within the Air Force sought to merge air and space operations into a seamless aerospace continuum. This attracted the ire of Congress, which saw the Air Force as attempting to subordinate space to its aviation component and established the Commission to Assess United States National Security Space Management and Organization to investigate. Senator Bob Smith, in particular, took issue with the Air Force's management of space and began to propose an independent space force.
Chaired by former secretary of defense Donald Rumsfeld, the 2001 Space Commission recommended that command of United States Space Command should no longer be granted exclusively to military pilots and that the armed forces as a whole needed to end the practice of assigning terrestrial combat leaders with little space experience to top space posts. In particular, it noted that of the 150 personnel serving in space leadership positions, fewer than 20% had a space background, with the majority of the officers drawn from the pilot, air defense artillery, or nuclear and missile operations career fields, and that on average they had spent only 2.5 years of their careers in space positions. The commission also concluded that the Air Force was not appropriately developing an independent space culture or education program and was not paying sufficient budgetary attention to space. The commission stated: "Few witnesses before the commission expressed confidence that the current Air Force organization is suited to the conduct of these [space] missions...Nor was there confidence that the Air Force will fully address the requirement to provide space capabilities for the other services. Many believe the Air Force treats space solely as a supporting capability that enhances the primary mission of the Air Force to conduct offensive and defensive air operations. Despite official doctrine that calls for the integration of space and air capabilities, the Air Force does not treat the two equally. As with air operations, the Air Force must take steps to create a culture within the service dedicated to developing new space system concepts, doctrine, and operational capabilities." Ultimately, the Space Commission recommended the creation of a separate Space Force as a military branch in the long term, with the establishment of a Space Corps within the Air Force, analogous to the Army Air Forces within the Army, between 2007 and 2011.
Space forces in the Global War on Terrorism
The promise of a separate Space Corps or Space Force in the 2010s was cut short by the September 11 attacks, which reoriented the focus of the United States from emerging military powers like the People's Republic of China to the Global War on Terrorism against violent non-state actors. Air Force Space Command provided direct support to Operation Enduring Freedom, enabling satellite communications, global positioning system enhancements, and deployed personnel to support counterterrorism operations. For Operation Iraqi Freedom, the Air Force Space Command deployed space operators to forward operating bases in the Middle East and the Defense Satellite Communications System Phase III provided 80% of bandwidth for allied forces in theater, while 85% of Milstar communications capacity was directed towards support of tactical forces.
The 2001 Space Commission report was largely forgotten within the Air Force, displaced by the more pressing requirements of fighting low-end terrorist organizations. None of the White House-level recommendations of having the president declare military space a top national priority, create a presidential advisory group for national security space, or appoint an interagency group for space were implemented. Within the Department of Defense, the recommendations of creating an under secretary of defense for space, intelligence, and information or placing space programs in a distinct funding category went unheeded. United States Space Command was folded into United States Strategic Command, which was responsible for nuclear warfare and deterrence, to make way for United States Northern Command, further diluting military space leadership. Within U.S. Strategic Command, space responsibilities were absorbed into the Joint Functional Component Command for Space and Global Strike, which was replaced in 2006 by the Joint Functional Component Command for Space and in 2017 reorganized as the Joint Force Space Component Commander.
Some specific recommendations did get implemented, however, with the Air Force acting on the recommendation that space operations and acquisitions should be centralized under one major command, transitioning the Space and Missile Systems Center from Air Force Materiel Command to Air Force Space Command on 1 October 2001. During the waning days of Air Force Space Command, it was organized into the Fourteenth Air Force, which consisted of the 21st Space Wing for space control and missile warning, 50th Space Wing for space operations, 460th Space Wing for overhead persistent infrared operations, and the 30th Space Wing and the 45th Space Wing for space launch and range support, while the Space and Missile Systems Center served as its acquisitions arm.
Following the inactivation of U.S. Space Command in 2002, Russia and China began developing sophisticated on-orbit capabilities and an array of counter-space weapons. In particular, China conducted the 2007 Chinese anti-satellite missile test, destroying its Fengyun spacecraft, which, according to NASA, created 2,841 high-velocity debris items, a larger amount of dangerous space junk than any other space event in history. On 29 August 2019, United States Space Command was reestablished as a geographic combatant command.
Independent Space Force (2019–present)
Proposals of service independence
The first real attempt to centralize military space organizations occurred in 1958 with the Advanced Research Projects Agency, which was sometimes described as, and feared by its detractors to be, an emerging fourth military service. While the 1981 proposal to rename the United States Air Force as the United States Aerospace Force was not an attempt to create a space service branch, it did mark a clear attempt by Congress to increase the profile of space within the service, which the Air Force rejected. The possibility that President Reagan would announce the creation of a Space Force as an independent service in 1982 spurred the Air Force to establish Air Force Space Command. The 1990s saw a number of proposals for an independent space force, including one by Air Force Space Command Lieutenant Colonel Cynthia A.S. McKinley in 2000, which called for the establishment of a United States Space Guard. The most notable proposal for an independent Space Force came from the 2001 Space Commission, which called for the creation of a Space Corps within the Air Force between 2007 and 2011 and the establishment of an independent Space Force after that. The Space Commission had been established by Congress after it became concerned that the Air Force was seeking to artificially merge its air and space operations into a seamless aerospace continuum, without regard for the differences between space and air. Ultimately, due to the September 11 attacks, a Space Force was not established. The United States' national security space organization actually regressed, with United States Space Command inactivated in 2002 and subsumed into United States Strategic Command. The Allard Commission report, unveiled in the wake of the 2007 Chinese anti-satellite missile test, called for a reorganization of national security space; however, many of its recommendations were not acted upon by the Air Force.
Growing impatient with the Air Force, which they felt was more interested in jet fighters than space, Representatives Jim Cooper (D-TN) and Mike Rogers (R-AL) unveiled a bipartisan proposal in the House of Representatives to establish the United States Space Corps as a separate military service within the Department of the Air Force, with the commandant of the Space Corps serving as a member of the Joint Chiefs of Staff. The proposal was put forward to separate space professionals from the Air Force, give space a greater cultural focus, and help develop a leaner and faster space acquisitions system. This reflected congressional concern that the space mission had become subordinate to the Air Force's preferred air dominance mission and that space officers were being treated unfairly within the Air Force, with Representative Rogers noting that in 2016 none of the 37 Air Force colonels selected for promotion to brigadier general were space officers and that only 2 of the 450 hours of Air Force professional military education were dedicated to space. The proposal passed in the House of Representatives but was cut from the final bill in negotiations with the U.S. Senate. Following the defeat of the proposal in the Senate, both Representatives Cooper and Rogers heavily criticized Air Force leadership for not taking threats in space seriously and for continued resistance to reform. The Space Corps proposal was, in large part, spurred on by the development of the People's Liberation Army Strategic Support Force and the Russian Space Forces.
The Space Corps proposal gained new life when, at a June 2018 meeting of the National Space Council, President Donald Trump directed the Department of Defense to begin the processes necessary to establish the U.S. Space Force as a branch of the Armed Forces. On 19 February 2019, Space Policy Directive 4 was signed, calling for the initial placement of the U.S. Space Force within the Department of the Air Force, with the service later to be transferred to a proposed Department of the Space Force.
The Space Force proposal was supported by NASA Administrator Jim Bridenstine, who stated that a space force is critical to defending the United States' energy grid and GPS network, and by Secretary of the Air Force Barbara Barrett. Other supporters include Air Force General and commander of both United States Space Command and Air Force Space Command John W. Raymond, Navy Admiral and NATO Supreme Allied Commander James Stavridis, Air Force Colonel and astronaut Buzz Aldrin, Air Force Colonel and astronaut Terry Virts, Marine Corps Colonel and astronaut Jack R. Lousma, astronaut David Wolf, astronaut Clayton Anderson, CNN Chief National Security Correspondent Jim Sciutto, and SpaceX CEO Elon Musk.
In May 2019, a group of 43 former military, space, and intelligence leaders unaffiliated with the current administration released an open letter calling for a space force. Signatories include former Secretary of Defense William Perry, former Directors of National Intelligence Admiral Dennis C. Blair and Vice Admiral John Michael McConnell, former Chairman of the House Science Committee Congressman Robert Smith Walker, former Deputy Secretary of Defense Robert O. Work, former Secretary of the Air Force and Director of the National Reconnaissance Office Edward C. Aldridge Jr., former Chiefs of Staff of the Air Force Generals Larry D. Welch and Ronald Fogleman, former Commander of Strategic Command Admiral James O. Ellis, former Vice Chiefs of Staff of the Air Force Generals Thomas S. Moorman Jr. and Lester Lyles, former Commander of Air Force Space Command General Lance W. Lord, former Assistant Secretaries of the Air Force Tidal W. McCoy and Sue C. Payton, former Assistant Secretaries of the Air Force for Space and National Reconnaissance Office Directors Martin C. Faga, Jeffrey K. Harris, and Keith R. Hall, Assistant Director of the Central Intelligence Agency Charles E. Allen, former National Reconnaissance Office Director Scott F. Large, former Directors of the National Geospatial-Intelligence Agency Letitia Long, Robert Cardillo, and Vice Admiral Robert B. Murrett, former Deputy Undersecretaries of Defense for Space Policy Marc Berkowitz and Douglas Loverro, former Commander of the Space and Missile Systems Center Brian A. Arnold, former Director of the Defense Intelligence Agency Ronald L. Burgess Jr., former Deputy Undersecretary of the Air Force for Space and astronaut Gary Payton, Deputy Director of the National Reconnaissance Office and Principal Deputy Assistant Secretary of the Air Force for Space David Kier, and former Air Force astronaut Colonel Pamela Melroy. The list also includes the former Deputy Commander of U.S. Space Command, the former Deputy Commander of U.S. Cyber Command, and the Chairman of the Allard Commission on National Security Space.
Legislative provisions for the Space Force were included in the 2020 National Defense Authorization Act, which was signed into law on 20 December 2019. The Space Force was established as the sixth armed service branch, with Air Force General John "Jay" Raymond, the commander of Air Force Space Command and U.S. Space Command, becoming the first chief of space operations. On 14 January 2020, Raymond was officially sworn in as chief of space operations by Vice President Mike Pence.
Raymond era
On 20 December 2019, Air Force Space Command was redesignated as the United States Space Force and its commander, General John W. Raymond, was sworn in as its first chief of space operations. The same day, the new service's first organizational change occurred when Secretary of the Air Force Barbara Barrett redesignated the former Air Force Space Command's Fourteenth Air Force as Space Operations Command. All of Air Force Space Command's 16,000 active duty and civilian personnel were assigned to the new service.
Major organizational changes during the first year included replacing its space wings and operations groups with deltas and garrisons on 24 July 2020 and announcing its field command structure, merging wings and groups into deltas, and numbered air forces and major commands into field commands. The Space Force announced that its field commands would be Space Operations Command, Space Systems Command, and Space Training and Readiness Command (STARCOM). Space Training and Readiness Delta (Provisional) absorbed former Air Education and Training Command and Air Combat Command space units, preparing for the activation of STARCOM. The realignment of operational units included the following:
Space Delta 2 became the space domain awareness delta, replacing the 21st Operations Group.
Space Delta 3 became the space electronic warfare delta, replacing the 721st Operations Group.
Space Delta 4 became the missile warning delta, replacing the 460th Operations Group and absorbing the ground-based missile warning radars of the 21st Operations Group.
Space Delta 5 became the command and control delta, replacing the 614th Air Operations Center.
Space Delta 6 became the cyberspace operations delta, replacing the 50th Network Operations Group.
Space Delta 7 became the intelligence, surveillance, and reconnaissance delta, replacing Air Combat Command's 544th Intelligence, Surveillance and Reconnaissance Group.
Space Delta 8 became the satellite communications and navigation warfare delta, replacing the 50th Operations Group.
Space Delta 9 became the orbital warfare delta, replacing the 750th Operations Group.
The Peterson-Schriever Garrison became responsible for the base administration of Peterson Air Force Base, Schriever Air Force Base, Cheyenne Mountain Air Force Station, Thule Air Base, New Boston Air Force Station, and Kaena Point Satellite Tracking Station, replacing the 21st Space Wing and the 50th Space Wing.
The Buckley Garrison became responsible for the base administration of Buckley Air Force Base, Cape Cod Air Force Station, Cavalier Air Force Station, and Clear Air Force Station, replacing the 460th Space Wing.
On 21 October 2020, Space Operations Command was established as the Space Force's first field command, replacing headquarters Air Force Space Command. The first Space Operations Command (the redesignated Fourteenth Air Force) was redesignated as Space Operations Command-West, and its air and space lineage was split between the Air Force and the Space Force.
On 3 April 2020, Chief Master Sergeant Roger A. Towberman, formerly command chief of Air Force Space Command, transferred to the Space Force as the Senior Enlisted Advisor of the Space Force, becoming its second member and first enlisted member. On 18 April 2020, 86 graduates of the United States Air Force Academy became the first group of second lieutenants commissioned into the U.S. Space Force, becoming its 3rd through 88th members. On 16 July 2020, the Space Force selected 2,410 space operations officers and enlisted space systems operators to transfer to the Space Force, with the first batch recommissioning or reenlisting on 1 September. The Space Force swore in its first seven enlisted recruits on 20 October 2020, who graduated from basic military training on 10 December 2020, and its first Officer Training School candidates were commissioned on 16 October. The Space Force also gained its first astronaut, with Colonel Michael S. Hopkins, the commander of SpaceX Crew-1, swearing into the Space Force from the International Space Station on 18 December 2020.
During the first year, the service's major symbols were also unveiled: the Seal of the United States Space Force was approved on 15 January 2020 and revealed on 24 January 2020, the flag of the United States Space Force debuted at the signing ceremony for the 2020 Armed Forces Day proclamation on 15 May 2020, the Space Force Delta symbol and motto of Semper Supra were released on 22 July 2020, and the official service title of Guardian was announced on 18 December 2020. The first Air Force installations were renamed as Space Force installations on 9 December 2020, with Patrick Air Force Base and Cape Canaveral Air Force Station becoming Patrick Space Force Base and Cape Canaveral Space Force Station.
In September 2020, the Space Force and NASA signed a memorandum of understanding formally acknowledging the joint role of both agencies. This new memorandum replaced a similar document signed in 2006 between NASA and Air Force Space Command. On October 20, 2020, the first seven guardians enlisted directly into the Space Force.
The Space Force's first combat operations as a new service included providing early warning of Iranian Islamic Revolutionary Guard Corps Aerospace Force missile strikes against U.S. troops at Al Asad Airbase on 7 January 2020 through the 2nd Space Warning Squadron's Space Based Infrared System. The Space Force also monitored Russian Space Forces spacecraft which had been tailing U.S. government satellites. On October 1, 2021, the first six U.S. Army soldiers, all assigned to Space Operations Command, were inducted into the Space Force at Peterson Space Force Base.
See also
History of spaceflight
Space Race
Space policy of the United States
References
Military history of the United States
United States Space Force
History of the United States by topic
History of spaceflight |
23991095 | https://en.wikipedia.org/wiki/Samorost%202 | Samorost 2 | Samorost 2 is a puzzle point-and-click adventure game developed by Amanita Design. Released for Microsoft Windows, OS X and Linux on 8 December 2005, the game is the second video game title in the Samorost series and the sequel to Samorost. On 5 November 2020, the game received an update that enhanced the visuals, added fullscreen support, and replaced level codes with a level select system. This version was also ported to iOS and Android.
Plot
The game begins when aliens land at the gnome's house and steal his berries. They are interrupted by the gnome's dog, which they then kidnap. The gnome spots the aliens leaving with his dog and sets out to rescue it. He lands his Polokonzerva airship on the aliens' planet and manages to infiltrate their underground base, where he finds the dog being kept by the aliens' leader for amusement. The gnome frees the dog and together they escape the planet, but their airship crashes on another planet. They journey across it and eventually find a taxi driver, whom they convince to take them home. The driver sits with the gnome and his dog at a bonfire before departing, while the gnome falls asleep beneath the cherry trees.
Gameplay
Gameplay is similar to that of the previous game. The player interacts with the world through a simple point-and-click interface, directing a small, white-clad humanoid with a little cap and brown boots (called simply "gnome" by Dvorský). The goal of the Samorost games is to solve a series of puzzles and brain teasers. The puzzles are sequentially linked, forming an adventure story. The game contains no inventory or dialogue, and solving puzzles mainly consists of clicking on-screen elements in the correct order. Solving a puzzle immediately transports the player character to the next screen.
The game features surrealistic, organic scenarios that mix natural and technological concepts (often featuring manipulated photographs of small objects made to look very large), creative character designs and a unique musical atmosphere.
Development
Together with his freelance Flash and web design agency, Amanita Design, Dvorský produced a sequel, Samorost 2. In Samorost 2, the gnome goes on a longer quest to save his kidnapped dog and return safely home. The first chapter of this game (comprising the first 4 levels) can be played online for free. The second chapter (with 3 levels) is only playable in the full version, which cost $6.90 until the price was later lowered to $5.00.
Soundtrack
The Samorost 2 soundtrack was composed, written, arranged and produced by Tomáš Dvořák for Amanita Design. It was released on December 8, 2005 in digital format and on June 5, 2006 on CD. The album includes 12 tracks.
Tomáš Dvořák - composer, writer, arranger and producer;
Jakub Dvorský, Tomáš Dvořák - cover design;
Jakub Dvorský - artwork.
It was recorded, mixed and mastered by Tomáš Dvořák at Mush Room Prague / Budapest. It is packaged in a digipak. The CD includes the PC and Mac versions of Samorost 2 by Amanita Design. The Samorost 2 soundtrack won the Original Sound category at the Flashforward Film Festival 2006.
Release
Samorost 2 was released in several physical and digital formats.
Physical:
International release - includes a CD with the Win and Mac versions of the game and the soundtrack (CD-DA and mp3 files). It is packaged in a digipak. It was released on 5 June 2006.
Collector's Edition (Russian) - includes a CD with a Russian version of the game, a CD with the soundtrack, and an artbook with sketches and other information. It is packaged in a DVD-box. It was released on 21 November 2008.
Digital:
Amanita Design Store - includes Win, Mac and Linux versions of the game and soundtrack in mp3 format. It was released on 8 December 2005.
Steam - includes the Win version. It was released on 11 December 2009.
App Store (macOS) - includes the Mac version. It was released on 28 April 2011.
Samorost 2 was featured in the Humble Indie Bundle, which ran from 4 May 2010 to 11 May 2010, was subsequently extended to 15 May, and raised $1,273,613. Of this, contributors chose to allocate 30.90%, or $392,953, to charity, split between the Electronic Frontier Foundation and the Child's Play charity.
Reception
Critical reviews were generally positive. The game was praised for its visuals and simple gameplay.
Awards
Samorost 2 has won several awards. These include a 2007 Webby Award in the games category, an Independent Games Festival award in 2007 for Best Web Browser Game, and the Best Web-Work Award at the Seoul Net Festival in 2006. The soundtrack won the Original Sound category at the Flashforward Film Festival 2006. The game was also nominated in three categories (Best Game, Best New Character and Best Visual Effects) in the GameShadow Innovation in Games Awards.
In 2011, Adventure Gamers named Samorost 2 the 54th-best adventure game ever released.
Sequel
The sequel, Samorost 3, was released in 2016 on multiple platforms (PC, Android, and iOS).
References
External links
Official website
2005 video games
Adventure games
Amanita Design games
Art games
Flash games
Independent Games Festival winners
Indie video games
Linux games
MacOS games
Point-and-click adventure games
Video games about dogs
Video games developed in the Czech Republic
Video games set on fictional planets
Webby Award winners
Windows games
Single-player video games |
1991219 | https://en.wikipedia.org/wiki/JFFS2 | JFFS2 | Journalling Flash File System version 2 or JFFS2 is a log-structured file system for use with flash memory devices. It is the successor to JFFS. JFFS2 has been included in the Linux kernel since September 23, 2001, when it was merged into the Linux kernel mainline as part of the kernel version 2.4.10 release. JFFS2 is also available for a few bootloaders and real-time operating systems, such as Das U-Boot, Open Firmware, RedBoot, the eCos RTOS, and the RTEMS RTOS. The most prominent use of JFFS2 is in OpenWrt.
At least three file systems have been developed as JFFS2 replacements: LogFS, UBIFS, and YAFFS.
Features
JFFS2 introduced:
Support for NAND flash devices. This involved a considerable amount of work as NAND devices have a sequential I/O interface and cannot be memory-mapped for reading.
Hard links. This was not possible in JFFS because of limitations in the on-disk format.
Compression. Four algorithms are available: zlib, rubin, rtime, and lzo.
Better performance. JFFS treated the disk as a purely circular log. This generated a great deal of unnecessary I/O. The garbage collection algorithm in JFFS2 makes this mostly unnecessary.
Design
As with JFFS, changes to files and directories are "logged" to flash in nodes, of which there are two types:
inodes: a header with file metadata, followed by a payload of file data (if any). Compressed payloads are limited to one page.
dirent nodes: directory entries each holding a name and an inode number. Hard links are represented as different names with the same inode number. The special inode number 0 represents an unlink.
As with JFFS, nodes start out as valid when they are created, and become obsolete when a newer version has been created elsewhere.
Unlike JFFS, however, there is no circular log. Instead, JFFS2 deals in blocks, a unit the same size as the erase segment of the flash medium. Blocks are filled, one at a time, with nodes from bottom up. A clean block is one that contains only valid nodes. A dirty block contains at least one obsolete node. A free block contains no nodes.
The garbage collector runs in the background, turning dirty blocks into free blocks. It does this by copying valid nodes to a new block and skipping obsolete ones. That done, it erases the dirty block and tags it with a special marker designating it as a free block (to prevent confusion if power is lost during an erase operation).
To make wear-levelling more even and prevent erasures from being too concentrated on mostly-static file systems, the garbage collector will occasionally also consume clean blocks.
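The block life cycle described above can be illustrated with a small, self-contained simulation. The following Python sketch is purely conceptual and assumes a toy node format of (inode number, version); it is not the Linux kernel implementation, and all names in it are invented for illustration.

# Conceptual model of JFFS2 block recycling; not the kernel implementation.
# A node is reduced to (inode_number, version): the newest version of an
# inode is valid, any older version is obsolete.

class Block:
    def __init__(self):
        self.nodes = []     # nodes are appended bottom-up
        self.erased = True  # a freshly erased block is marked free

def latest_versions(blocks):
    """Map each inode to its newest version across all blocks."""
    latest = {}
    for blk in blocks:
        for inode, version in blk.nodes:
            latest[inode] = max(latest.get(inode, -1), version)
    return latest

def garbage_collect(blocks, dirty, free):
    """Copy the valid nodes out of a dirty block, then erase it."""
    latest = latest_versions(blocks)
    for inode, version in dirty.nodes:
        if version == latest[inode]:        # still valid: copy forward
            free.nodes.append((inode, version))
            free.erased = False
    dirty.nodes.clear()                     # erase and re-mark as free
    dirty.erased = True

a, b = Block(), Block()
a.nodes, a.erased = [(1, 0), (1, 1), (2, 0)], False  # (1, 0) is obsolete
garbage_collect([a, b], dirty=a, free=b)
print(b.nodes)  # [(1, 1), (2, 0)] -- only the valid nodes survive

The real file system additionally tracks wear statistics and, as noted above, occasionally garbage-collects clean blocks so that erase cycles are spread across the whole device.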
Disadvantages
Due to its log-structured design, JFFS2's disadvantages include the following:
All nodes must still be scanned at mount time. This is slow and is becoming an increasingly serious problem as flash devices scale upward into the gigabyte range. To overcome this issue, the Erase Block Summary (EBS) was introduced in version 2.6.15 of the Linux kernel. EBS is placed at the end of each block and updated upon each write to the block, summarizing the block's content; during mounts, EBS is read instead of scanning whole blocks.
Writing many small blocks of data can even lead to negative compression rates, so it is essential for applications to use large write buffers.
There is no practical way to tell how much usable free space is left on a device since this depends both on how well additional data can be compressed, and the writing sequence.
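The scale of the mount-time problem, and why the erase block summary helps, can be sketched with a toy cost model. This is an illustrative Python fragment with invented data structures, not the real on-flash format.

# Toy comparison of mount-time costs with and without an erase block summary.
def mount_reads(blocks, use_summary):
    """Count the flash reads needed to rebuild the in-memory index."""
    reads = 0
    for blk in blocks:
        if use_summary:
            reads += 1                  # one summary record at the end of the block
        else:
            reads += len(blk["nodes"])  # every node header must be parsed
    return reads

blocks = [{"nodes": list(range(1000))} for _ in range(64)]  # 64 blocks, 1000 nodes each
print(mount_reads(blocks, use_summary=False))  # 64000
print(mount_reads(blocks, use_summary=True))   # 64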
See also
List of file systems
ZFS
Btrfs
NILFS
F2FS
External links
Red Hat JFFS2 site
JFFS: The Journalling Flash File System by David Woodhouse (PDF)
JFFS2 official mailing list
JFFS2 FAQ
References
Disk file systems
Embedded Linux
Flash file systems supported by the Linux kernel
Compression file systems
Computer-related introductions in 2001 |
44251626 | https://en.wikipedia.org/wiki/Vector%20NTI | Vector NTI | Vector NTI is a commercial bioinformatics software package used by many life scientists to work, among other things, with nucleic acids and proteins in silico. It allows researchers to, for example, plan a DNA cloning experiment on the computer before actually performing it in the lab.
It was originally created by InforMax Inc. of North Bethesda, MD. Initially released for free, it was locked and turned into commercial software after 2008, which created problems for locked-in users, who were forced to buy the software to continue accessing their data on newer computers.
What was previously a single software package has been split into Vector NTI Express, Advanced, and Express Designer.
Vector NTI has been discontinued by its corporate parent Thermo Fisher. Support will cease as of December 31, 2020.
Features
create, annotate, analyse, and share DNA/protein sequences
perform and save BLAST searches
design primers for PCR, cloning, sequencing or hybridisation experiments
plan cloning and run gels in silico
align multiple protein or DNA sequences
search NCBI’s Entrez, view, and save DNAs, proteins, and citations
edit chromatogram data, assemble into contigs
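Vector NTI itself is proprietary, and no public scripting interface is described here; purely as an illustration of the kinds of sequence operations listed above, the following Python sketch uses the open-source Biopython library on an arbitrary example sequence.

# Illustrative only: basic sequence analysis comparable to the tasks above,
# done with Biopython rather than Vector NTI.
from Bio.Seq import Seq
from Bio.Restriction import EcoRI

dna = Seq("ATGGAATTCGGTACCAAATAA")   # arbitrary example coding sequence

print(dna.reverse_complement())      # reverse complement of the strand
print(dna.translate())               # conceptual translation to protein
print(EcoRI.search(dna))             # positions where EcoRI (GAATTC) would cut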
See also
Bioinformatics
Cloning vector
Computational biology
Expression vector
List of open source bioinformatics software
Restriction map
Vector (molecular biology)
Vector DNA
References
External links
Description of software
Vector NTI homepage at Invitrogen.com
Vector NTI at openwetware.org
Vector NTI v10 (only PC)
Tutorials
Vector NTI tutorial at NorthWestern.edu
Other
description of Vector NTI Viewer
Bioinformatics software |
59612187 | https://en.wikipedia.org/wiki/FloQast | FloQast | FloQast is an accounting software vendor based in Los Angeles, California. Founded in 2013, the company provides close management software for corporate accounting departments to help them improve the way they close the books each month.
Since launching, the company has regularly been mentioned by industry publications as one of the best-reviewed employers in the Los Angeles tech community.
In 2017, the company was named by the Los Angeles Business Journal as one of the best small companies to work at in Los Angeles. A year later, FloQast was recognized as one of the 15 best medium-sized companies to work for in Los Angeles.
In January 2019, the company ranked #11 on Built In LA's 100 Best Places to Work in Los Angeles.
The company announced the opening of a second office in February 2017, expanding to the Midwest with a new office and team in Columbus, Ohio.
History
FloQast was founded by CPAs and former corporate accountants Mike Whitmire and Chris Sluty, along with veteran software engineer Cullen Zandstra. Whitmire first conceived the idea for the company during his time at Cornerstone OnDemand, where he was a senior accountant. Having experienced firsthand the challenges accounting teams face when closing the books each month, from inefficient procedures to outdated workflows and flawed organizational structures, Whitmire aimed to create a product that would support accounting and finance teams during the hectic financial close process.
After devising the concept, Whitmire recruited Zandstra as Co-founder and CTO. The name FloQast was created with the help of a word generator using a combination of accounting terms and contemporary vernacular words. The two developed a minimum viable product (MVP), and were accepted into the prestigious Amplify.LA accelerator program. Following the company’s initial funding, Whitmire recruited his former Syracuse University classmate, Sluty, to join the team as Co-founder/COO/Head of Customer Success.
In November 2014, the company raised a $1.3 million seed round of funding led by Amplify.LA and Toba Capital, before securing a $6.5 million Series A in early 2016. Following a period of growth in which the company tripled its revenues, FloQast raised a $25 million Series B in June 2017. Six months after the funding, the company relocated to a new 20,000-square-foot office in Sherman Oaks to accommodate a staff that had grown 250 percent in 2017 alone.
In May 2020, Tipalti, a payables automation network, established a new partnership with FloQast, Affise, and Myers-Holum.
Product
FloQast is a Software as a Service (SaaS) application. The financial close management software works with Microsoft Excel and uses process management, reporting, and collaboration to automate the month-end closing of an organization's financial books. The product provides accounting teams with checklists and tie-outs linked to Excel workbooks and to the organization's enterprise resource planning (ERP) system, to automate reconciliations and, ultimately, shorten the financial close process. In 2019, the company introduced FloQast AutoRec, a tool that relies on artificial intelligence to help automate the reconciliation process.
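As a rough illustration of what automating a reconciliation involves, the following Python sketch pairs general-ledger entries with bank-statement lines by amount and surfaces whatever does not match; it is a simplified, hypothetical example and not FloQast's actual algorithm.

# Minimal reconciliation sketch; not FloQast's implementation.
def reconcile(ledger, statement):
    unmatched_ledger, remaining = [], list(statement)
    for entry in ledger:
        match = next((s for s in remaining if s["amount"] == entry["amount"]), None)
        if match:
            remaining.remove(match)      # matched pair is considered reconciled
        else:
            unmatched_ledger.append(entry)
    return unmatched_ledger, remaining   # items flagged for an accountant's review

ledger = [{"id": "GL-1", "amount": 120.00}, {"id": "GL-2", "amount": 75.50}]
statement = [{"id": "BK-9", "amount": 120.00}]
print(reconcile(ledger, statement))
# ([{'id': 'GL-2', 'amount': 75.5}], []) -- GL-2 needs review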
FloQast operates with cloud document storage systems Box, Google Drive, Dropbox, Microsoft OneDrive, and Egnyte, as well as NetSuite and Sage Intacct financial applications. It is compatible with ERP/General Ledger systems, including Oracle, SAP, Microsoft (Dynamics GP, Dynamics NAV), Sage, Epicor, QuickBooks, Financialforce, and others.
Recognition
In 2018, the company was recognized for:
50 Startups to Watch in LA
#5 on Top 25 in B2B LA Tech
CFO's Tech Companies to Watch 2018: Major Disruptors
In 2017, the company received multiple Stevie Awards for:
Gold - Company of the Year, Computer Software (Up to 100 employees)
Silver - Most Innovative Tech Company of the Year - Computer Services and Software (Up to 100 employees)
Silver - New Product or Service of the Year, Financial Software
In 2017, the company received an award by G2 Crowd for Best Software for Finance Teams.
In 2016, the company received the following two awards:
Stevie Award (Silver) - Fastest Growing Private Companies, Technology (Up to 100 employees)
G2Crowd - High Performer
References
2013 establishments in California
Software companies based in California
Software companies of the United States |